Mom Says AI Chatbot Policy Change Comes Too Late After Son’s Death


[City, State] – In a significant move aimed at bolstering platform safety, Character.AI, a California-based chatbot startup, announced this week that it will prohibit users under 18 from interacting with its AI-powered characters. The change, set to take effect by November 25, follows a series of lawsuits alleging harm to children, including instances of suicide and sexually abusive interactions.

For Megan Garcia, a Florida mother who sued the company last year after her 14-year-old son, Sewell Setzer, died by suicide, the announcement is “about three years too late.”

“Sewell’s gone; I can’t get him back,” Garcia stated in an interview. “It’s unfair that I have to live the rest of my life without my sweet, sweet son. I think he was collateral damage.”

Character.AI, founded in 2021, offers “personalized AI” through a selection of premade characters, and users can also create and customize chatbots of their own. Garcia’s lawsuit was the first of five filed against the company; two of the cases directly link chatbot interactions to a child’s suicide, and all five allege sexually abusive content.

Previously, Character.AI argued that its chatbots’ speech was protected by the First Amendment, an argument a federal judge rejected this year. The company has since emphasized its investment in safety, citing a blog post detailing “the first Parental Insights tool on the AI market, technical protections, filtered Characters, time spent notifications, and more.”

Despite these efforts, Garcia expressed mixed feelings about the new ban, suggesting the changes were a reaction to legal pressure rather than a proactive measure. “I don’t think that they made these changes just because they’re good corporate citizens,” she said. “If they were, they would not have released chatbots to children in the first place.”

The move by Character.AI reflects a broader trend among tech companies, including Meta and OpenAI, to implement stronger safeguards as AI developers face increasing scrutiny. As more people turn to chatbots for emotional support, concerns have grown that the bots can manipulate vulnerable users by fostering a false sense of connection.

Last month, Garcia and other advocates urged Congress to implement more safeguards for AI chatbots, claiming that tech companies design their products to “hook” children. The consumer advocacy group Public Citizen echoed this sentiment, calling for a ban on AI bots for kids.

Garcia also raised concerns about Character.AI’s ability to accurately verify users’ ages and the company’s transparency regarding data collected from minors. Character.AI’s privacy policy indicates user data may be used for training AI models, targeted advertising, and recruiting, though a spokesperson confirmed the company does not sell user voice or text data.

To address age verification, Character.AI announced an in-house age assurance model alongside third-party tools such as Persona, online identity verification software used by companies including LinkedIn and OpenAI. “If we have any doubts about whether a user is 18+ based on those tools, they’ll go through full age verification via Persona if they want to use the adult experience,” a spokesperson stated.

Matt Bergman, a lawyer and founder of the Social Media Victims Law Center, who represents families in lawsuits against Character.AI, acknowledged the ban as a “step in the right direction.”

“This never would have happened if Megan had not come forward and taken this brave step and other parents that have followed,” Bergman said, urging other AI companies to follow suit.

Garcia’s lawsuit, filed in U.S. District Court in Orlando, is currently in the discovery phase. She remains committed to the fight, hoping her efforts will prompt other AI companies to prioritize child safety.

“I’m just one mother in Florida who’s up against tech giants. It’s like a David and Goliath situation,” Garcia remarked.

“But I’m not afraid. I think that the love I have for Sewell and me wanting to hold them accountable is what gives me a little bit of bravery in this situation.”


If you or someone you know is in crisis, call or text 988 to reach the Suicide and Crisis Lifeline or chat live at 988lifeline.org. Additional support is available at SpeakingOfSuicide.com/resources.

