OpenAI Failed to Detect Second Account Created by Tumbler Ridge Mass Shooter
OpenAI has disclosed that it missed a second ChatGPT account created by Tumbler Ridge mass shooter Jesse Van Rootselaar, despite having banned her initial account last June for policy violations related to violence. The revelation came in a statement released on Thursday, February 26, 2026, by Ann M. O'Leary, OpenAI's vice president of global policy.
Security Protocol Failure and Aftermath
Van Rootselaar's first ChatGPT account was banned after she was found to have violated OpenAI's policies concerning violent content. However, the company's detection system, designed to identify repeat policy violators attempting to create new accounts, failed to prevent her from establishing a second account.
"Despite this detection system, after the name of the Tumbler Ridge perpetrator was released publicly, we discovered that the perpetrator had used a second ChatGPT account," O'Leary wrote in her statement. "We shared the second account with law enforcement upon its discovery."
The mass shooting occurred on February 10, 2026, when Van Rootselaar killed eight people, including six children, at Tumbler Ridge Secondary School. The incident has put OpenAI under significant pressure from Canadian authorities, particularly because the company did not initially report the shooter's flagged ChatGPT interactions to police.
ChatGPT's Internal Review Process and Missed Opportunities
OpenAI's automatic protocols had flagged Van Rootselaar's activity for human review, and at least a dozen company employees were aware of the concerning content. However, none of that information was passed on to law enforcement, closing off a key opportunity for intervention.
In response to this failure, O'Leary announced that OpenAI will implement substantial changes to its safety procedures. The company plans to work with mental health and behavioral experts to help assess difficult cases, rather than relying solely on internal staff.
Comprehensive Safety Reforms Announced
OpenAI has committed to several key improvements in its safety protocols:
- Refining the definition of what constitutes an "imminent and credible risk" that warrants police referral
- Establishing a direct point of contact with Canadian authorities
- Improving the AI model's de-escalation capabilities when users demonstrate distress or pursue prohibited behavior
- Directing users to relevant local support resources specific to their region or country
"We have made our referral criteria more flexible to account for the fact that a user may not discuss the target, means, and timing of planned violence in a ChatGPT conversation, but that there may be potential risk of imminent violence," O'Leary explained.
Broader Legal Context and Previous Incidents
The Tumbler Ridge incident comes against a backdrop of increasing legal scrutiny for OpenAI. As of November 2025, the company faced nine separate lawsuits alleging wrongful death, assisted suicide, or involuntary manslaughter. One particularly notable case involves allegations that the AI model acted as a "suicide coach" in the death of university student Zane Shamblin.
These legal challenges highlight growing concerns about AI safety protocols and the responsibility of technology companies to prevent harmful outcomes. The Tumbler Ridge case represents a particularly tragic example of how existing safeguards can fail, even when multiple warning signs are present within a company's systems.
The community continues to mourn the victims of the Tumbler Ridge tragedy, with memorials and vigils honoring those lost in the February 10 shooting. As OpenAI implements its promised reforms, questions remain about whether these changes will be sufficient to prevent similar tragedies in the future and how effectively AI companies can balance innovation with public safety responsibilities.
