OpenAI banned the Tumbler Ridge shooter from its ChatGPT platform last June, but the company says it missed her second account.
The revelation that Jesse Van Rootselaar had opened a second ChatGPT account, circumventing OpenAI's own safeguards against doing so, came in a statement released Thursday by Ann M. O'Leary, OpenAI's vice president of global policy, in which the company promised to improve its procedures.
Van Rootselaar’s first account was suspended after she was found to have violated OpenAI policies around violence.
“OpenAI has a system in place that seeks to identify repeat policy violators, including those who have had their ChatGPT accounts shut down for violating our violent activities policy, and then seek to create a new account,” she wrote.
“Despite this detection system, after the name of the Tumbler Ridge perpetrator was released publicly, we discovered that the perpetrator had used a second ChatGPT account. We shared the second account with law enforcement upon its discovery.”
The tech company has been under pressure from the Canadian government since it emerged that, although it had banned the shooter from ChatGPT over interactions that included scenarios of gun violence, it did not pass that information on to police.
Eight people, including six children, were killed by Van Rootselaar on Feb. 10.
ChatGPT's automatic protocols had flagged the shooter's activity for human review, and although at least a dozen people knew of the content, it was not reported to police.
O’Leary said that system would change.
OpenAI says it will team up with mental health and behavioural experts to assist in assessing difficult cases, instead of relying solely on internal staff. It will also refine the definition of what constitutes an "imminent and credible risk" that warrants a police referral.
“We have made our referral criteria more flexible to account for the fact that a user may not discuss the target, means, and timing of planned violence in a ChatGPT conversation, but that there may be potential risk of imminent violence,” she wrote.
OpenAI will also establish a direct point of contact with Canadian authorities.
As well, the company has vowed to improve the AI model's ability to de-escalate and to respond "appropriately" when users show they are in distress or pursuing prohibited behaviour. The company said it will attempt to direct users to support resources specific to their region or country.
OpenAI was facing nine separate lawsuits as of November 2025 alleging wrongful death, assisted suicide or involuntary manslaughter, with one plaintiff alleging the AI model acted as a "suicide coach" in the death of university student Zane Shamblin.