Man sues OpenAI over ChatGPT falsely telling a journalist he’s embezzling non-profit money


Mark Walters, a radio host, is attempting to sue ChatGPT developer OpenAI over its chatbot publishing information that falsely accuses Walters of embezzlement and implicates him in a court case.


Fred Riehl, the editor-in-chief of a gun website, went to OpenAI’s ChatGPT for information on the case The Second Amendment Foundation v. Robert Ferguson. Riehl asked the AI-powered chatbot to summarize the court case, which he said it did quite quickly, spitting out a summary that directly alleged involvement by Mark Walters, a Georgia radio host who, according to ChatGPT, embezzled money from The Second Amendment Foundation (SAF).

Walters filed suit in Gwinnett County Superior Court on June 5th, claiming that OpenAI acted negligently and “published libelous material regarding Walters.” When Riehl asked ChatGPT to provide more information on Walters, OpenAI’s chatbot began giving details, including a legal complaint it said had been filed by the founder and executive vice president of the Second Amendment Foundation (SAF). This complaint was supposedly lodged against Walters, whom ChatGPT identified as the SAF’s treasurer and Chief Financial Officer (CFO).

The complaint, per ChatGPT, alleged that Walters “misappropriated funds for personal expenses without authorization or reimbursement, manipulated financial records and bank statements to conceal his activities, and failed to provide accurate and timely financial reports.”

However, none of these details were true. Walters claims in his suit that he has never worked at the SAF nor been affiliated with it in any way. This is borne out by the authentic, real-life complaint in SAF v. Ferguson, which doesn’t mention Walters’s name once and has nothing to do with financial accounting claims. ChatGPT’s information on the case was incorrect and falsely implicated Walters.

These errors by ChatGPT and other language models of its type are widely referred to as hallucinations, and OpenAI has acknowledged that this is a problem it aims to fix. Whether hallucinations leave OpenAI liable for damages is another thing entirely, as the individual suing would need to prove that ChatGPT’s false publication actually harmed them.

In the case of Walters, who is suing over damage to his reputation, University of California, Los Angeles School of Law professor Eugene Volokh told Gizmodo that while there are situations in which plaintiffs can sue AI makers over the content their tools produce, Walters’s case doesn’t actually show any harm to his reputation. Walters’s lawyers would have to prove that OpenAI acted with “knowledge of falsehood or reckless disregard of the possibility of falsehood,” or “actual malice,” to win.

Whether or not Walters can win the case, suits like this are a preview of what we will likely see a lot more of in the future.

For more information on this story, check out this link here.
