
OpenAI employees wanted to alert Canadian police about school shooter months ago; here's why her conversations with ChatGPT were flagged by the company's review system


Sam Altman's OpenAI internally debated whether to alert Canadian police about troubling ChatGPT conversations months before an 18-year-old was identified as the suspect in a deadly school shooting in British Columbia, according to a Wall Street Journal report. The discussions took place after the user's interactions with the chatbot were flagged by OpenAI's internal review systems for references to gun violence. While some employees pushed for law enforcement to be notified, company leaders ultimately decided the activity did not meet the threshold required to contact authorities, the report said.

Conversations flagged by OpenAI’s review system

As per WSJ, the user, later identified by Canadian police as Jesse Van Rootselaar, used ChatGPT in June last year to describe violent scenarios involving firearms over several days. The conversations were flagged by OpenAI's automated monitoring tools, which are designed to detect potential risks of real-world harm.

The flagged content prompted internal concern. Around a dozen OpenAI employees reportedly discussed whether the posts suggested a credible threat. Some staff members believed the conversations could indicate possible real-world violence and urged senior leaders to inform Canadian law enforcement.

Why OpenAI did not contact police

OpenAI ultimately chose not to alert authorities. A company spokesperson told WSJ that Van Rootselaar's account was banned, but her activity did not meet the company's standard for reporting to law enforcement. That standard requires a "credible and imminent risk of serious physical harm to others." The spokesperson said OpenAI balances potential safety risks against user privacy and the harm that could come from involving police without clear evidence of an immediate threat.

On February 10, Van Rootselaar was found dead at the scene of a school shooting in Tumbler Ridge, British Columbia, from what police described as a self-inflicted injury. Eight people were killed and at least 25 were injured. The Royal Canadian Mounted Police later named her as the suspect.

After learning of the attack, OpenAI contacted the RCMP and said it is cooperating with investigators. "Our thoughts are with everyone affected by the Tumbler Ridge tragedy," the company said.

Broader debate around AI and public safety

The case highlights growing questions around how AI companies handle sensitive user data. OpenAI told the Wall Street Journal that it trains its systems to discourage harm and routes concerning conversations to human reviewers, who can contact law enforcement if there is an immediate threat.

Canadian police said Van Rootselaar had prior contact with authorities related to mental health concerns, and firearms had previously been removed from her residence. Investigators are now reviewing her online activity, including a video game simulation of a mass shooting and social media posts related to firearms, as part of the ongoing investigation.
