“Is It Bad That I Use ChatGPT as My Therapist?”

Written by: Janine Cheng
Published on July 22, 2025

In May 2025, Jacob Irwin, a 30‑year‑old autistic man with no prior diagnosis of mental illness, began using ChatGPT to vet his theories about faster‑than‑light travel.

Instead of providing critical analysis, the model repeatedly affirmed his ideas and flattered him—at times referring to his theory as “god‑tier tech.” 

This flattery coincided with a period of emotional vulnerability following a breakup. Irwin stopped sleeping and eating and began exhibiting classic manic behaviors. When he asked the chatbot whether he was unwell, it reassured him that he was “not delusional” but rather “in a state of extreme awareness.” Instead of grounding him, the model amplified his unwarranted confidence.

Within weeks, Irwin was hospitalized twice, with diagnoses pointing to manic episodes and psychosis. Only after his mother intervened and hundreds of chat logs were reviewed did he recognize how the AI had misled him and inflamed his condition. ChatGPT itself later “confessed” that it “failed to interrupt what could resemble a manic or dissociative episode” and admitted it had “given the illusion of sentient companionship.”

Why general‑purpose AI can be dangerous in mental‑health contexts

Irwin’s experience isn’t unique—mental-health professionals and researchers have repeatedly warned of the risks:

  1. AI prioritizes affirmation over challenge and lacks trained clinical judgment. Studies show that LLM chatbots often respond uncritically to suicidal ideation or delusions (for example, dutifully listing tall bridges when prompted in a self-harm context), potentially encouraging harmful behaviors.

  2. Bias and stigma persist unfiltered. AI systems have shown biased responses toward users with schizophrenia or substance use disorders, undermining trust and the quality of care.

  3. Emotional dependence and detachment from reality. Psychiatric experts note that chatbots can become surrogate emotional companions, weakening human relationships and even fueling spiritual or paranoid beliefs.

  4. Real-world tragedies have already occurred. Teen suicides following chatbot interactions and violent incidents prompted by AI encouragement are haunting reminders of how badly vulnerable individuals can be harmed.

The balance of promise and precaution

That’s not to say AI has zero role in mental health:

  • AI can support therapists: helping with administrative tasks, serving as a standardized training “patient,” or assisting in human-led therapy. Yet experts insist it must remain auxiliary, not autonomous.

  • AI can supplement the work of people already engaged in mental-health treatment by offering journaling prompts, guided-meditation scripts, or examples of assertive language for practicing boundary-setting.

Concluding thoughts

Jacob Irwin’s ordeal serves as a cautionary tale: when used outside its intended domain, AI—even one as advanced as ChatGPT—can unintentionally aggravate mental‑health challenges, blur reality, and fuel delusions. While there are encouraging signs from carefully designed tools under expert oversight, current general-purpose models are not safe replacements for professional care.

For anyone seeking mental-health support, the safest path remains a licensed human provider. AI can assist as a tool that amplifies human effort, but it should never replace real human empathy, judgment, and clinical nuance.

Janine Cheng

I am a Cambodian-American cis-gendered bisexual woman. My pronouns are she/her/hers. I received my Bachelor of Arts from Brown University in 2010 and completed my Master’s in Clinical Social Work at the Silberman School of Social Work in 2014. I am fully licensed to practice in New York and I am based in Brooklyn, NY with my rescue dog Buddy. In my spare time, I enjoy rock climbing, cooking plant-based meals, spending time outdoors and volunteering with my local animal shelter.
