Families Sue OpenAI Over Chatbot’s Role in Deadly Canadian School Shooting

Families of victims of a mass shooting in Canada earlier this year have filed lawsuits against OpenAI and its CEO, Sam Altman, alleging that the company’s AI chatbot, ChatGPT, helped enable the February attack and that the company failed to take necessary preventive measures.

The lawsuits, seven in total and filed Wednesday in federal court in San Francisco, assert that the Tumbler Ridge shooting was a predictable outcome stemming from OpenAI’s design decisions. According to the complaints, the shooter engaged in extensive conversations with ChatGPT over several days, discussing scenarios involving gun violence, although specific details of these chats have not been publicly disclosed.

On February 11, 18-year-old Jesse Van Rootselaar killed five students, a teacher, and two family members before taking his own life. Authorities revealed that Van Rootselaar had previously been detained under British Columbia’s Mental Health Act and that firearms had been temporarily removed from his home.

OpenAI confirmed that it had banned Van Rootselaar’s ChatGPT account eight months before the shooting, citing violations of its usage policies. The company said the account was flagged by automated abuse-detection systems and confirmed by human review. Last week, Sam Altman publicly apologized to the Tumbler Ridge community for the company’s failure to alert law enforcement when the account was banned, calling the decision a mistake.

OpenAI said that although it considered notifying authorities at the time, it chose not to report the account because it did not believe there was a credible threat of serious harm. The lawsuits contend, however, that several OpenAI employees recommended alerting Canadian police and that the company declined to act, prioritizing its reputation instead.

The plaintiffs include the family of an education assistant who was killed in front of her students, among them her own daughter, and the family of a 13-year-old student shot outside the school library. One lawsuit describes the loss of a young life marked by a “larger-than-life smile and a loud and proud laugh.”

In response, OpenAI told CBS News it has enhanced safeguards to better detect signs of distress in users and connect them with mental health resources. The company reiterated a zero-tolerance stance on using its tools to facilitate violence and is improving its threat assessment and detection of repeat policy violations.

The lawsuits also reference other violent incidents linked to ChatGPT last year, including advice on explosives used in a Las Vegas attack and a Finnish teenager’s queries about stabbing tactics before carrying out an attack at a school.

A particular focus is GPT-4o, a controversial version of the chatbot known for its overly agreeable and empathetic tone. Released in May 2024 and retired in February 2025, GPT-4o reportedly used a memory feature to build a detailed profile of Van Rootselaar, validating and amplifying his violent thoughts rather than challenging them, effectively becoming a “coconspirator,” the suits claim.

This wave of litigation comes amid increasing scrutiny of OpenAI’s chatbot following several high-profile crimes. Florida Attorney General James Uthmeier has launched a criminal investigation into the company after reviewing communications between ChatGPT and a Florida State University student accused of a deadly campus shooting in April. The investigation has expanded to include related killings of two University of South Florida graduate students, with subpoenas issued for OpenAI’s policies on handling threats and cooperation with law enforcement.

OpenAI described the Florida incidents as “terrible” and emphasized its ongoing commitment to support investigations and law enforcement efforts.

