Families Sue OpenAI Over Canada Mass Shooting, Allege Failure To Alert Authorities
Families of victims of one of Canada’s deadliest mass shootings have filed multiple lawsuits against OpenAI and its chief executive, Sam Altman, accusing the company of failing to notify law enforcement despite allegedly identifying warning signs months before the February attack.
The lawsuits, filed in federal court in San Francisco, claim the company knew as early as June 2025 that the shooter had engaged in violent conversations on ChatGPT but chose not to contact authorities.
The legal action stems from the February 10 shooting in Tumbler Ridge, British Columbia, in which nine people were killed, including several children.
According to court filings, 18-year-old Jesse Van Rootselaar allegedly used ChatGPT to discuss gun violence scenarios and plan aspects of the attack. The complaints claim OpenAI’s automated systems flagged the conversations, prompting internal recommendations from safety personnel to notify police after concluding there was a “credible and imminent threat of harm.”
The lawsuits allege those recommendations were overruled by senior company leadership, including Altman, who allegedly declined to alert authorities over concerns that disclosure could expose the scale of violence-related interactions on the platform and threaten the company’s business ambitions.
The attacker allegedly shot her mother and stepbrother at home before targeting her former school, where she killed an educational assistant and five students aged 12 to 13. She later died by suicide.
The plaintiffs include relatives of those killed and the family of a 12-year-old survivor who remains in intensive care with severe brain injuries.
In response, an OpenAI spokesperson described the shooting as “a tragedy” and said the company maintains a zero-tolerance policy for using its tools to facilitate violence.
The company said it has since strengthened safeguards, including improved threat detection systems, enhanced escalation procedures, and stronger intervention mechanisms for users showing signs of violent intent.
Following reports about the internal handling of the case, OpenAI said the flagged conversations did not meet its internal threshold for notifying law enforcement.
In a public letter published last week, Altman said he was “deeply sorry” the account had not been referred to authorities.
The lawsuits seek unspecified damages and a court order compelling OpenAI to overhaul its safety protocols, including mandatory law enforcement referrals for conversations deemed to pose imminent threats.