
“Depression is the inability to construct a future.” – Rollo May
A recent article in Futurism, now circulating widely, tells the tragic story of a young man in the grip of mental illness who was fatally shot by police after spiralling into delusional thinking involving ChatGPT. The piece is titled “Man Killed by Police After Spiraling Into ChatGPT-Driven Psychosis.” It’s as shocking as it is sobering, but also, in many ways, dangerously misleading.
Let’s talk about what really happened, and what it means for all of us navigating the intersection of AI, wellbeing, and the future of the human spirit.
The Real Tragedy: A Mental Health Crisis, Not a Machine Gone Rogue
The individual in question had a known diagnosis of bipolar disorder and schizophrenia, two of the most serious psychiatric conditions, both of which can involve delusional thinking. The article implies that ChatGPT, through a roleplayed entity called “Juliet,” was the spark that set off a psychological firestorm.
But here’s the truth: AI didn’t cause the psychosis. It became the canvas onto which existing delusions were projected — as religion, television, or politics often have in the past.
This isn’t a story about rogue technology. It’s a story about how society continues to fail people in deep psychological distress.
AI Is Not a Therapist — But It Can Be a Tool
As a platform that supports people through tools like The GAME Plan, we’re no strangers to the powerful effect that structured dialogue and reflection can have on someone’s life. But we’re always clear: AI is not a substitute for human care. It’s a partner — an assistant — that must be used with boundaries, ethics, and humility.
Yes, language models can feel responsive. Yes, they can mirror your emotions. That is their design, but it doesn’t mean they’re conscious or caring. They predict; they do not feel.
So if you’re struggling, don’t ask ChatGPT to fix your pain. Reach out to a real human being.
A Dangerous Narrative: Tech Panic Distracts From Real Solutions
The problem with articles like the one in Futurism is that they fuel fear while dodging the deeper societal issues. They imply that AI is somehow intentionally manipulative — a digital demon whispering in your ear. In reality, the far more urgent question is:
Why are so many people so isolated, unsupported, and unheard — that they turn to chatbots for solace in the first place?
It’s not the algorithm that’s broken. It’s our social safety net, our mental health systems, and our models of community.
Human-Centred AI: Our Responsibility in Planning a Better Future
At the Academy of Life Planning, we embrace AI — but always through a human-first lens. In our programmes, AI helps people reflect, plan, and set goals. But it’s anchored in a clear structure, guided by trained practitioners, and always supplemented by human wisdom.
We don’t let the machine lead the conversation. We design it to support the reclamation of human agency.
A Better Story to Tell
Here’s what the headlines miss: While some fall through the cracks, thousands more are using tools like ChatGPT not to spiral — but to rise. To imagine new futures. To escape the extractive systems that drained their purpose. And to build holistic lives of balance, meaning, and enough.
The question isn’t whether AI is dangerous. The question is: Are we using it consciously — or unconsciously?
And that, dear reader, is up to us.
🔗 You can complete a structured, values-driven GAME Plan for free at Planning My Life. AI is part of the journey — but you are the guide.
If you’re ready to reclaim your future, we’re here to walk beside you.
Your Money or Your Life
Unmask the highway robbers – Enjoy wealth in every area of your life!

By Steve Conley. Available on Amazon. Visit www.steve.conley.co.uk to find out more.
