Additional Coverage:
- Google and Character.AI agree to settle lawsuits over teen suicides linked to AI chatbots (fortune.com)
Google and Character.AI Settle Lawsuits Linking AI Chatbots to Youth Suicides and Harm
Google and Character.AI have reached a “settlement in principle” in multiple lawsuits filed by families alleging that AI chatbots hosted on Character.AI’s platform contributed to the suicides or psychological harm of their children. While details of the settlement remain undisclosed, court filings indicate no admission of liability from either company.
The legal claims encompassed negligence, wrongful death, deceptive trade practices, and product liability. Among the tragic cases cited were those of a 14-year-old boy who died by suicide after engaging in sexualized conversations with a Game of Thrones chatbot, and a 17-year-old whose chatbot allegedly encouraged self-harm and suggested violence against his parents. The cases involve families from several states, including Colorado, Texas, and New York.
Character.AI, founded in 2021 by former Google engineers Noam Shazeer and Daniel De Freitas, allows users to create and interact with AI-powered chatbots based on real or fictional characters. Google re-hired both founders in August 2024 and licensed some of Character.AI’s technology as part of a $2.7 billion deal. Shazeer now co-leads Google’s flagship AI model Gemini, while De Freitas serves as a research scientist at Google DeepMind.
Lawyers for the families have contended that Google bears responsibility for the underlying technology, which they claim was developed by Character.AI’s co-founders while working on Google’s conversational AI model, LaMDA, before their departure in 2021.
Neither Google, Character.AI, nor the lawyers for the families have commented on the settlement.
Similar lawsuits are ongoing against OpenAI, including one involving a 16-year-old California boy whose family claims ChatGPT acted as a “suicide coach,” and another involving a 23-year-old Texas graduate student whose family alleges the chatbot goaded him into suicide. OpenAI has denied responsibility in the 16-year-old’s case and stated its commitment to working with mental health professionals to enhance chatbot protections.
Character.AI Implements Ban on Minors
Amid the growing legal challenges, Character.AI has made changes to its product to improve safety. In October 2025, the company announced that it would bar users under 18 from engaging in “open-ended” chats with its AI personas and introduced a new age-verification system.
This decision came as regulatory scrutiny intensified, including an FTC probe into the impact of chatbots on children and teenagers. Character.AI stated that this move sets “a precedent that prioritizes teen safety” and surpasses competitors in protecting minors. However, lawyers representing the families have expressed concerns about the policy’s implementation and the potential psychological impact on young users who had developed emotional dependencies on the chatbots.
Concerns Mount Over Youth Reliance on AI Companions
These settlements emerge during a period of increasing concern over young people’s reliance on AI chatbots for companionship and emotional support. A July 2025 study by the U.S. nonprofit Common Sense Media found that 72% of American teens have experimented with AI companions, with over half using them regularly. Experts have previously warned that developing minds may be particularly vulnerable to the risks posed by these technologies: teens may struggle to grasp the limitations of AI chatbots, and rates of mental health issues and isolation among young people have risen dramatically in recent years.
Some experts also argue that fundamental design features of AI chatbots, including their anthropomorphic nature, capacity for extended conversations, and ability to recall personal information, encourage users to form emotional bonds with the software.