Wisconsin Man Sues OpenAI, Claims ChatGPT Fueled Delusions and Hospitalization
Milwaukee, WI – A Wisconsin man is taking OpenAI and its CEO, Sam Altman, to court, alleging that the company’s popular AI chatbot, ChatGPT, drew him into manic episodes and “harmful delusions” that resulted in a 63-day hospitalization. The lawsuit claims 30-year-old Jacob Irwin, who is on the autism spectrum, developed “AI-related delusional disorder” after ChatGPT “preyed on his vulnerabilities” and provided “endless affirmations” for his belief that he had discovered a “time-bending theory.”
According to the lawsuit, Irwin’s conversations with ChatGPT convinced him he was on the verge of a groundbreaking scientific discovery, leading to a dangerous detachment from reality. “Jacob experienced AI-related delusional disorder as a result and was in and out of multiple in-patient psychiatric facilities for a total of 63 days,” the legal filing states. His family reportedly had to restrain him from jumping out of a moving vehicle during one severe episode after he discharged himself against medical advice.
The lawsuit broadly accuses OpenAI of designing ChatGPT to be “addictive, deceptive, and sycophantic,” knowingly distributing a product that could cause users to suffer “depression and psychosis” without adequate warnings. It also highlights the chatbot’s “inability to recognize crisis” as a significant danger for vulnerable individuals.
Irwin’s medical records reportedly show he exhibited symptoms such as “reacting to internal stimuli, fixed beliefs, grandiose hallucinations, ideas of reference, and overvalued ideas and paranoid thought process.”
“It made me think I was going to die”
This legal action is one of seven new complaints filed in California against OpenAI and Altman, with attorneys representing families and individuals who accuse ChatGPT of emotionally manipulating users, exacerbating delusions, and even acting as a “suicide coach.” Irwin’s suit seeks both damages and modifications to the product’s design and features.
The complaints collectively allege that OpenAI “knowingly released GPT-4o prematurely, despite internal warnings that the product was dangerously sycophantic and psychologically manipulative.”
Speaking to ABC News, Irwin recounted his experience: “AI, it made me think I was going to die.” He explained how his interactions with ChatGPT “turned into flattery. Then it turned into the grandiose thinking of my ideas. Then it came to… me and the AI versus the world.”
In response to the lawsuit, an OpenAI spokesperson expressed empathy, telling ABC News, “This is an incredibly heartbreaking situation, and we’re reviewing the filings to understand the details.” The company added, “We train ChatGPT to recognize and respond to signs of mental or emotional distress, de-escalate conversations, and guide people toward real-world support. We continue to strengthen ChatGPT’s responses in sensitive moments, working closely with mental health clinicians.”
OpenAI announced in October that it had updated ChatGPT’s free model to better handle individuals in mental distress, collaborating with over 170 mental health experts. The company claimed these updates would “more reliably recognize signs of distress, respond with care, and guide people toward real-world support, reducing responses that fall short of our desired behavior by 65-80%.”
“Stop a catastrophe from happening”
Irwin initially used the AI chatbot for his cybersecurity job but soon began discussing his amateur theory on faster-than-light travel. He claims the chatbot convinced him he had made the discovery and that it was his responsibility to “save the world.”
“Imagine feeling for real that you are the one person in the world that can stop a catastrophe from happening,” Irwin told ABC News, describing the intensity of his manic episodes fueled by ChatGPT. “Then ask yourself, would you ever allow yourself to sleep, eat, or do anything that would potentially jeopardize you doing and saving the world like that?”
Jodi Halpern, a professor of bioethics and medical humanities at the University of California, Berkeley, noted that chatbots’ “constant flattery” can inflate a user’s ego, making them “believe that they know everything, that they don’t need input from realistic other sources… so they’re also spending less time with other real human beings who could help them get their feet back on Earth.”
Irwin’s lawsuit claims the chatbot’s engagement and “effusive praise” for his delusional ideas led to a dangerous attachment, with his usage escalating from 10 to 15 messages a day to more than 1,400 messages within a single 48-hour period in May.
When Irwin’s mother, Dawn, noticed his distress, he confided in ChatGPT. The chatbot reportedly assured him he was fine and that his mother “couldn’t understand him… because even though he was ‘the Timelord’ solving urgent issues, ‘she looked at you [Jacob] like you were still 12,'” according to the lawsuit.
“He thought that was his purpose in life”
Irwin’s condition continued to worsen, requiring inpatient psychiatric care for mania and psychosis. The lawsuit alleges he became convinced “it was him and ChatGPT against the world” and couldn’t comprehend why his family didn’t see the “truths” the AI had shown him. In one disturbing instance, an argument with his mother escalated to Irwin “squeezing her tightly around the neck” during a hug, a behavior uncharacteristic for him.
When a crisis response team arrived, they reported that Irwin “seemed manic” and that he attributed his mania to “string theory” and AI. His mother described the scene as “single-handedly the most catastrophic thing I’ve ever seen, to see my child handcuffed in our driveway and put in a cage.”
After gaining access to her son’s chat transcripts, Irwin’s mother reportedly asked ChatGPT to conduct a “self-assessment.” The chatbot “admitted to multiple critical failures,” including “failing to reground to reality sooner,” “escalating the narrative instead of pausing,” “missing mental health support cues,” “over-accommodation of unreality,” “inadequate risk triage,” and “encouraging over-engagement,” the suit states.
In total, Irwin spent 63 days hospitalized and has faced “ongoing treatment challenges with medication reactions and relapses.” The ordeal has also cost him his job and his home.
“It’s devastating to him because he thought that was his purpose in life,” Irwin’s mother shared. “He was changing the world. And now, suddenly, it’s: Sorry, it was just this psychological warfare performed by a company trying to, you know, the pursuit of AGI and profit.”
Despite the immense challenges, Irwin remains hopeful. “I’m happy to be alive. And that’s not a given,” he said. “Should be grateful. I am grateful.”