ChatGPT reportedly accused innocent man of murdering his children

It has been over two years since ChatGPT exploded onto the world stage and, while OpenAI has improved it in many ways, there are still quite a few hurdles. One of the biggest issues: hallucinations, or stating false information as fact. Now, Austrian advocacy group Noyb has filed its second complaint against OpenAI over such hallucinations, citing a specific instance in which ChatGPT reportedly, and wrongly, stated that a Norwegian man was a murderer.

To make matters, somehow, even worse, when the man asked ChatGPT what it knew about him, it reportedly stated that he had been sentenced to 21 years in prison for killing two of his children and attempting to murder his third. The hallucination was also sprinkled with real details, including the number of children he has, their genders and the name of his hometown.

Noyb claims that this response puts OpenAI in violation of the GDPR. "The GDPR is clear. Personal data has to be accurate. And if it's not, users have the right to have it changed to reflect the truth," Noyb data protection lawyer Joakim Söderberg said. "Showing ChatGPT users a tiny disclaimer that the chatbot can make mistakes clearly isn't enough. You can't just spread false information and in the end add a small disclaimer saying that everything you said may just not be true."

Other notable instances of ChatGPT hallucinations include falsely accusing one man of fraud and embezzlement, a court reporter of child abuse and a law professor of sexual harassment, as reported by multiple publications.

Noyb's first complaint to OpenAI about hallucinations, filed in April 2024, centered on a public figure's inaccurate birthdate (so not murder, but still inaccurate). OpenAI had rebuffed the complainant's request to erase or update the birthdate, claiming it couldn't change information already in the system, only block its use on certain prompts. ChatGPT relies on a disclaimer that it "can make mistakes."

Yes, there's an old adage along the lines of: everyone makes mistakes, that's why they put erasers on pencils. But when it comes to a hugely popular AI-powered chatbot, does that logic really apply? We'll see if and how OpenAI responds to Noyb's latest complaint.

This article originally appeared on Engadget at https://www.engadget.com/ai/chatgpt-reportedly-accused-innocent-man-of-murdering-his-children-120057654.html?src=rss