When OpenAI scheduled meetings with B.C. officials on Feb. 10 and 11, the company hoped they would serve as introductory conversations that could lead to the opening of a Canadian office in Vancouver and the expansion of AI data centres in the province.
Instead, the first meeting came on the day of the mass shooting in Tumbler Ridge. That was followed a few days later by a Wall Street Journal story that OpenAI employees had flagged activity on ChatGPT last summer by Tumbler Ridge shooter Jesse Van Rootselaar, eventually banning her from the platform.
Van Rootselaar shot dead eight people, including five 12- and 13-year-olds at the local high school, and her 11-year-old stepbrother and mother in the family home.
Rick Glumac, B.C.’s minister of state for AI and new technologies, said Tuesday there was no indication during his meeting, which took place on Feb. 10, the day of the shooting, that OpenAI was holding back any information.
“No, absolutely not at that time,” said Glumac. “It was an introductory meeting, and they were talking about possibly moving an office to British Columbia.”
The day after the meeting with Glumac, and the shooting in Tumbler Ridge, the OpenAI team met with Meghan Sali, Premier David Eby’s director of policy. Again, the company did not bring up Van Rootselaar’s activity on their platform, according to the premier’s office.
There is no indication whether or not the representatives from OpenAI at the meetings with Glumac and Sali were aware of Van Rootselaar’s ChatGPT activity. OpenAI did not make anyone available for an interview with Postmedia on Tuesday.
On Feb. 12, OpenAI contacted Eby’s office again to ask for RCMP contact information.
Eby told reporters Tuesday that the company never clarified why it needed RCMP contact details and his staff didn’t ask.
“The news that OpenAI might have had the opportunity to stop this terrible tragedy in Tumbler Ridge. It’s just devastating for families in Tumbler Ridge, and I think, for families across Canada, people across Canada who have been devastated by this tragedy,” said the premier.
“I want them to meet with the families. I want them to look in the eyes of these families and tell them why they made the call they did.”
He promised the province will get to the bottom of what OpenAI knew and when, whether that be through a coroner’s inquest, the police investigation, or even a public inquiry.
Federal Artificial Intelligence Minister Evan Solomon was due to speak with OpenAI officials late Tuesday in Ottawa, although he downplayed the meeting to reporters in the hours prior, saying they would not get into the details of Van Rootselaar’s conversation.
Instead, he said he hoped to go over the company’s safety procedures and how they plan to prevent future harm from being inflicted on Canadians.
“I’m hoping that the OpenAI safety officials when they come to Ottawa, tell us more details about their safety protocols, their escalation thresholds and how they keep Canadians safe, and if they have a threat that they perceive, what the technology does and what the human process does,” said Solomon.
“We’ll see what they say, and our response is that all options are on the table when it comes to understanding what we can do about AI chatbots.”
Eby said he would like the federal government to bring in standards that determine when social media companies should be required to inform police about concerning activity on one of their platforms.
At the same time, he said there are no plans to bring back the Online Harms Act that was introduced in 2024 but then pulled in favour of negotiations with companies like Meta.
Wendy Wong, a digital arts and humanities professor at UBC, said regulation in this area walks a fine line between protecting personal privacy and ensuring public safety.
She said that, so far, social media giants have been given an enormous amount of leeway in how they moderate their own content, something that has created numerous problems.
“This is a really tough area to adjudicate. I do agree that there’s a line that requires a lot of judgment, and right now the judgment is really happening on the corporate side. And I think that’s the real concern we have here,” said Wong.
“We’re sort of by default, feeding these really important decisions to companies that are making these platforms that we like to engage with. Unfortunately, I think that puts these companies in a really awkward position, but it gives them a lot of power, and I think it’s right that governments are trying to step in and prevent harm.”
Her colleague Oludolapo Makinde, a doctoral fellow at UBC, said there are so far no regulations around AI chatbots in Canada, even as some countries in the European Union begin implementing their own rules.
She said the key is to ensure that people aren’t under unnecessary surveillance but also that there is a duty to report violent inclinations.
“There’s a clear balance that needs to be reached. Where OpenAI has information that shows that there’s a likelihood that violence might be committed, that the public’s going to be subject to some social violence, there should be a duty to report that to the police,” she said.