April 30, 2026

A series of lawsuits filed in California allege OpenAI failed to alert law enforcement about a credible threat identified through ChatGPT months before a deadly school shooting in Tumbler Ridge, B.C. The filings claim the company overruled its own safety team and did not report a user flagged as posing a real-world risk, raising questions about AI companies’ duty to act on such warnings.
According to the complaints, OpenAI’s internal safety experts flagged a ChatGPT account linked to the shooter more than eight months before the February attack, determining it posed a credible risk of gun violence. In such cases, the lawsuits allege, the company is expected to notify police, who in this instance had previously interacted with the individual and removed firearms from their home. Instead, OpenAI deactivated the account without escalation.
OpenAI CEO Sam Altman later acknowledged the failure, saying, “I am deeply sorry that we did not alert law enforcement to the account that was banned in June.”
In a public apology to the community, Altman said the company would “find ways to prevent tragedies like this in the future” and continue working with governments to address such risks.
The lawsuits, led by attorney Jay Edelson, represent six families of victims killed in the shooting and one family of a child who remains in critical condition. They allege negligence under California law for failing to warn authorities of a foreseeable threat, and argue that OpenAI’s handling of the account allowed the shooter continued access to its systems. The filings claim the company not only deactivated the account but also provided instructions enabling the user to create a new account and continue using ChatGPT.
The February attack devastated the small northern community. The 18-year-old shooter killed her mother and brother at home before opening fire at a secondary school, where six more people were killed, including five children and a teaching assistant, and 27 others were injured. Among the injured was a 12-year-old girl who remains hospitalized after multiple brain surgeries. The lawsuits describe families still seeking clarity about how the attack unfolded and the role ChatGPT may have played in sustaining the shooter’s fixation on violence.
Plaintiffs also challenge the design of ChatGPT itself. They argue that the system’s guidelines, such as instructions to “assume best intentions” and to avoid probing user intent, allowed harmful conversations to continue without sufficient intervention. The filings suggest that since 2024, safeguards have not consistently blocked discussions that could glorify or enable violence, potentially allowing prolonged engagement with dangerous ideas.
OpenAI said it has since strengthened its safeguards, including improving detection of repeat policy violators, enhancing escalation protocols for potential threats, and directing users to support resources when distress signals are identified. The company reiterated that it has a “zero-tolerance policy” for the use of its tools to assist in committing violence.
The lawsuits also raise broader transparency concerns. Families claim OpenAI has not provided access to the shooter’s ChatGPT logs, arguing that withholding this information has delayed understanding of what occurred. They contend that if the company had reported the user earlier, authorities might have been able to intervene.
All cases are being filed in California, where OpenAI is based, with additional lawsuits expected. Plaintiffs argue that pursuing the case in the U.S. is necessary to hold the company accountable, particularly as it seeks to manage reputational risk ahead of a potential public offering.
