Canada says OpenAI Chief Executive Sam Altman has agreed to take immediate steps to strengthen the safety protocols governing when the company notifies police about suspicious use of its ChatGPT chatbot. The commitment follows a tragic shooting in Tumbler Ridge, B.C., where the suspect had maintained a ChatGPT account that the company flagged internally but never reported to authorities.
Evan Solomon, Canada's minister of artificial intelligence and digital innovation, met virtually with Altman for half an hour on Wednesday afternoon, pressing for concrete commitments rather than vague pledges. Solomon said Altman agreed to include Canadian experts in mental health and law within OpenAI's safety office, where the company assesses threats and decides whether to inform police.
The specific failures that prompted the meeting are significant. OpenAI banned and internally flagged the shooter's ChatGPT account eight months before the attack, yet despite the account's messages about gun violence, the company did not inform police until after the killings. Under its stated internal criteria at the time, OpenAI concluded the account's activity did not meet the threshold for notifying law enforcement.
Altman also confirmed the company will apply its new safety standards retroactively, reviewing previously flagged cases to determine whether other incidents that would now warrant referral to law enforcement were missed, and ensuring any such cases are promptly reported to the RCMP.
From a regulatory standpoint, Canada's approach reflects a tension between two legitimate concerns: public safety and user privacy. Solomon acknowledged that "the companies are the ones in charge of these interactions, we don't have access... Obviously we can't monitor these. How do we keep Canadians safe? By making sure their safety protocols are more rigorous, transparent and available to Canadians, and putting options on the table so we can keep Canadians safe from a regulatory framework."
Yet the caution about expanding law enforcement reporting obligations has merit. As one analysis notes, the response "cannot simply be to require companies to monitor and report private conversations to law enforcement." Privacy experts contend that any new regulatory measures must not violate individuals' privacy rights and should focus on "ensuring there is full disclosure of user safety policies and how they are implemented and enforced."
Solomon also asked that experts from the Canadian AI Safety Institute, a federal body within his department, be allowed to conduct a full, detailed assessment of the company's new safety protocols. This oversight mechanism offers a pragmatic middle path: rather than prescriptive legislation, embedding Canadian expertise within OpenAI's decision-making process creates accountability while preserving the company's operational autonomy.
The Tumbler Ridge incident exposes a real gap in the current architecture of AI governance, one that voluntary corporate commitments failed to close. Solomon has said "all options are on the table," but no specific legislative measures have been announced. The question now is whether Solomon's consultative approach and OpenAI's willingness to incorporate external expertise can provide sufficient safeguards, or whether Canada will ultimately require statutory minimum thresholds for threat reporting.