No 'substantial' new safety measures offered by OpenAI following Tumbler Ridge shooting, says minister

By News Room

OpenAI officials offered no “substantial” new safety measures in a meeting with Canadian ministers after reports the Tumbler Ridge mass shooter’s ChatGPT account was banned for concerning content last year but not escalated to police. 

“We expressed our disappointment that no substantial new safety measures were presented at this time. OpenAI indicated they will return shortly with more concrete proposals tailored to the Canadian context,” Artificial Intelligence Minister Evan Solomon said in a statement Tuesday night following the meeting. 

The meeting was also attended by Public Safety Minister Gary Anandasangaree, Justice Minister Sean Fraser, and Minister of Canadian Identity and Culture Marc Miller.

“We made it clear that Canadians expect credible warning signs of serious violence to be escalated in a timely and responsible way. Internal review alone is not sufficient when public safety is at stake,” Solomon said, but added specific details of the devastating school shooting on Feb. 10 were not discussed due to the ongoing police investigation. 

Solomon said the government is “reviewing broader measures to ensure that AI systems and platforms operating in Canada have clear standards and accountability.”

Solomon summoned OpenAI officials to the Ottawa meeting to explain a “deeply disturbing” Wall Street Journal report that company employees warned leadership last June about content Jesse Van Rootselaar had posted to ChatGPT, which was flagged internally through an automated review, but that law enforcement was never informed. 

On Feb. 10, 18-year-old Van Rootselaar shot her mother, half-brother and six other people before being found dead from a self-inflicted gunshot wound in the small B.C. town of Tumbler Ridge, the RCMP has said. 

It remains unclear what role, if any, ChatGPT played in the shooter’s planning and execution of the deadly attack.

OpenAI confirmed that it identified a ChatGPT account linked to Van Rootselaar through a process that relies on automated tools and human verification to flag potentially violent uses of the technology. The company banned the account and said it considered referring it to law enforcement, but concluded the activity did not meet its threshold for doing so: an “imminent and credible risk of serious physical harm to others.”

An OpenAI spokesperson said that upon learning of the shooting, the company “proactively reached out to the Royal Canadian Mounted Police with information on the individual and their use of ChatGPT, and we’ll continue to support their investigation.” 

B.C. Premier David Eby has also requested a meeting with OpenAI officials, he said earlier Tuesday at an unrelated press conference in Victoria. Last week, he said OpenAI officials revealed no information about the shooter’s history with ChatGPT during a preplanned meeting held the day after the shooting. 

Eby said British Columbians, and most importantly, the families of the people killed — including four 12-year-olds and one 13-year-old at the local school — deserve to know what OpenAI knew and why they made the decisions they did. 

“I will never forget sitting at the table in a meeting room … talking to a dad who walked me through the last moments of his child’s life,” he said. “I want them (OpenAI) to know that. I want them to hear that from me. I want them to meet with the families. I want them to look in the eyes of these families and tell them why they made the call they did.”

Solomon has said “all options are on the table” when it comes to regulating AI chatbots, some of which are facing allegations of encouraging suicide and other harmful acts. 

Measures currently underway include a recently introduced justice bill tackling non-consensual sexual deepfakes and online child exploitation, and plans for legislation on modernizing data and privacy law. The government is also considering an online harms bill, similar to one proposed by the last Liberal government, that could regulate AI chatbot providers and require transparency. 

With files from Raisa Patel
