OpenAI banned the Tumbler Ridge shooter from its ChatGPT platform last June, but the company says it missed her second account.
The revelation that Jesse Van Rootselaar had opened a second ChatGPT account, circumventing OpenAI’s own safeguards against doing so, came in a statement released Thursday by Ann M. O’Leary, OpenAI’s vice president of global policy, in which the company promised to improve its procedures.
Van Rootselaar’s first account was suspended after she was found to have violated OpenAI policies around violence.
“OpenAI has a system in place that seeks to identify repeat policy violators, including those who have had their ChatGPT accounts shut down for violating our violent activities policy, and then seek to create a new account,” she wrote.
“Despite this detection system, after the name of the Tumbler Ridge perpetrator was released publicly, we discovered that the perpetrator had used a second ChatGPT account. We shared the second account with law enforcement upon its discovery.”
The tech company has been under pressure from the Canadian government since it emerged that, although it had banned the shooter from ChatGPT over interactions that included scenarios of gun violence, it did not pass that information on to police.
Eight people, including six children, were killed by Van Rootselaar on Feb. 10.
ChatGPT’s automated systems had flagged the shooter’s activity for human review, and although at least a dozen people were aware of the content, it was not reported to police.
O’Leary said that system would change.
OpenAI says it will team up with mental health and behavioural experts to assist in assessing difficult cases, instead of relying solely on internal staff. It will also refine its definition of what constitutes an “imminent and credible risk” that warrants a police referral.
“We have made our referral criteria more flexible to account for the fact that a user may not discuss the target, means, and timing of planned violence in a ChatGPT conversation, but that there may be potential risk of imminent violence,” she wrote.
OpenAI will also establish a direct point of contact with Canadian authorities.
As well, the company has vowed to improve the AI model’s ability to de-escalate and to respond “appropriately” when users demonstrate they are in distress or pursuing prohibited behaviour. The company said it will attempt to direct users to support resources specific to their region or country.
Premier David Eby said his staff had met with OpenAI on Thursday to express their disappointment and concern about the company’s conduct in the incident, and that the changes were “cold comfort” to the people and families of Tumbler Ridge.
OpenAI said during the meeting that it had adjusted its reporting threshold, and that Van Rootselaar’s conduct would have warranted police notification had it occurred today.
“Obviously, it’s hard to know how to react to that kind of information. The words ‘thank you’ are not the right ones. Clearly, they tragically missed the mark in bringing this information forward,” he said. “The consequences of that will be borne by the people of Tumbler Ridge, and families of Tumbler Ridge for the rest of their lives. These are not small stakes, and it illustrates why these companies cannot be trusted to set their own reporting thresholds, and especially to set their own thresholds where there are no apparent consequences for not meeting them.”
Eby said a national standard needs to be set for similar tech companies to establish a minimum threshold of reporting. He had declined to participate in the meeting because he wanted to speak with CEO Sam Altman directly, something Altman has agreed to do.
OpenAI’s policy changes are welcome, he said, but there is little information on what they will mean in practice.
“The big trick with the changes that have been proposed by OpenAI is we don’t actually know what they will mean, because we don’t actually know what information OpenAI had in terms of the information input by the shooter and what the Chatbot responded with to the shooter,” Eby said. “We don’t know whether ChatGPT assisted the shooter in planning. We don’t know the information the shooter put in, and whether any reasonable person presented with the transcript after the flag went up would have gone to police. We just simply don’t know.”
Eby said police have the transcripts and have issued preservation orders to any tech or social media companies that are part of the investigation. Those records will eventually become public, he said, though he couldn’t put a timeline on it because the mechanism for their release hasn’t been established yet.
As of November 2025, OpenAI was facing nine separate lawsuits alleging wrongful death, assisted suicide or involuntary manslaughter, with one of the plaintiffs alleging the AI model acted as a “suicide coach” in the death of university student Zane Shamblin.