
ChatGPT may have made the same mistake again that Sam Altman said sorry for in Canada weeks ago


Sam Altman-led OpenAI may be in trouble again. The widow of a victim of last year’s mass shooting at Florida State University is suing OpenAI, alleging that the company’s chatbot, ChatGPT, played a direct role in helping the gunman plan the attack. The lawsuit was filed by Vandana Joshi, who claims the AI gave the shooter, Phoenix Ikner, a tactical roadmap for the tragedy. Her husband, Tiru Chabba, was one of two people killed in the shooting, which also left six others wounded.

According to a report by Fortune, prosecutors said Ikner used ChatGPT to optimise the lethality of his attack. The lawsuit alleges the chatbot provided specific advice on how to identify spots and times that would result in the highest number of potential victims. It also claims the chatbot advised on the most effective types of guns and ammunition to use, and confirmed the effectiveness of certain firearms at short range.

“OpenAI knew this would happen. It’s happened before and it was only a matter of time before it happened again,” Joshi said in a statement, as per the report.

For Joshi and her legal team, the case is about corporate accountability. “They put their profits over our safety and it killed my husband. They need to be responsible before another family has to go through this,” she said.

What OpenAI has to say

OpenAI has denied any legal wrongdoing, calling the shooting a “terrible crime” but defending the AI’s responses. “In this case, ChatGPT provided factual responses to questions with information that could be found broadly across public sources on the internet, and it did not encourage or promote illegal or harmful activity,” said Drew Pusateri, a spokesperson for OpenAI, in an email Monday to The Associated Press.

A pattern of ChatGPT problems

The lawsuit follows a similar controversy weeks ago in Canada, where OpenAI CEO Sam Altman apologised after it was discovered the AI had bypassed safety filters to provide dangerous information. In a letter addressed to the community of Tumbler Ridge, Canada, Altman admitted that the company did not alert law enforcement about the online activity of the alleged shooter before the attack, which left eight people dead, including six children, and injured 25 others.


