AI Company Ignored Warnings Before Mass Shooting

Tumbler Ridge Tragedy: OpenAI Employees Reportedly Raised Concerns Over Shooter’s Chatbot Interactions Months Before Attack

Tumbler Ridge, BC – A new report from the Wall Street Journal has revealed a disturbing detail about the horrific Tumbler Ridge mass shooting: employees at OpenAI, the company behind the popular AI chatbot ChatGPT, were reportedly aware of concerning interactions between the shooter, Jesse Van Rootselaar, and its AI months before the attack, but did not alert authorities.

Around a dozen OpenAI employees were reportedly privy to these alarming interactions, which were initially flagged by an automated review system. According to individuals familiar with the matter, these exchanges, spanning multiple days, included discussions of violent scenarios involving gun violence.

This information surfaced months before Van Rootselaar, 18, carried out a deadly rampage on February 10, killing his mother, stepbrother, five students, and a teacher in Tumbler Ridge, British Columbia, before taking his own life. Twenty-five others were injured in the attack.

OpenAI’s official policy dictates that law enforcement should only be contacted in cases of an “imminent threat of real-world harm or violence.” While some employees reportedly advocated for contacting police, the company ultimately decided against it.

Authorities later confirmed that Van Rootselaar, a biological male who had identified as female since age six, had dropped out of the very school he attacked. Police were reportedly aware of Van Rootselaar’s mental health struggles, having made multiple visits to his home for various incidents in the past.

Further investigation into Van Rootselaar’s online footprint revealed a disturbing obsession with death, including active participation on a website hosting videos of murders, according to the New York Post. His social media accounts also contained images of him with firearms and content related to hallucinogenic drugs. Concerns about his behavior were apparently not new, as his mother reportedly expressed alarm in a Facebook parents’ group back in 2015.

A spokesperson for OpenAI informed Fox News Digital that Van Rootselaar’s account was banned in June 2025 for violating its usage policies. However, the company determined that the activity did not meet the threshold for alerting law enforcement. The spokesperson emphasized the company’s need to balance privacy concerns, noting that overly frequent referrals to police could lead to unintended harm.

OpenAI’s chatbot model is designed to deter real-world harm when it detects dangerous situations. The company stated that it proactively reached out to the Royal Canadian Mounted Police (RCMP) after the incident and is cooperating with their investigation by providing information on Van Rootselaar’s chatbot activity.

In a statement following the tragedy, OpenAI said, “Our thoughts are with everyone affected by the Tumbler Ridge tragedy. We proactively reached out to the Royal Canadian Mounted Police with information on the individual and their use of ChatGPT, and we’ll continue to support their investigation.”
