Radio host sues OpenAI for what he calls an “AI hallucination”
OpenAI, the company behind the generative AI platform ChatGPT, has been sued for defamation after a man was allegedly the victim of an “AI hallucination”. As Gizmodo reported, a journalist asked the AI chatbot to summarize The Second Amendment Foundation v. Robert Ferguson, a lawsuit filed by the gun advocacy group against Washington Attorney General Bob Ferguson over the state’s gun laws. In its response, the AI allegedly claimed that a radio host named Mark Walters was accused of embezzling money from the Second Amendment Foundation, even though Walters was not involved in the case at all. In response, Walters sued OpenAI over the ChatGPT error, accusing the company of defamation and of damaging his reputation.
The first lawsuit of its kind over the legal liability of artificial intelligence models
This lawsuit is the first of its kind and could shape how the legal system treats defamation claims against generative AI in the future. However, Gizmodo spoke to a legal expert who said the merits of this particular case are shaky, although that doesn’t mean there won’t be stronger lawsuits down the road. The embezzlement claim isn’t the only falsehood OpenAI’s ChatGPT allegedly produced in response to firearms journalist Fred Riehl’s inquiry about The Second Amendment Foundation v. Robert Ferguson. The platform allegedly claimed that Walters was the chief financial officer and treasurer of the Second Amendment Foundation, a position that supposedly allowed him to embezzle funds for “personal expenses”, manipulate “financial records and bank statements to conceal his activities”, and fail to “provide accurate and timely financial reports”. As Walters’ lawsuit argues, none of this is true: Walters was never the chief financial officer or treasurer, nor was he ever employed by the foundation.
SAF’s 30-page complaint against Ferguson doesn’t mention Walters at all, and according to the lawsuit, OpenAI’s ChatGPT doubled down when asked to clarify Walters’ role in the case. The model allegedly quoted a non-existent passage of the complaint and even got the case number wrong. Riehl, the journalist who asked ChatGPT for the case summary, reached out to the lawyers involved to verify the facts and left Walters’ name out of his final story. OpenAI CEO Sam Altman has previously stated that ChatGPT “hallucinations” are a problem the company is actively working on to improve the model’s accuracy. However, the alleged damage was done, and one of Walters’ attorneys said the mistake could harm Walters’ reputation, “exposing him to public hatred, contempt or ridicule”.
What actual damage has been done to the radio host’s reputation?
Gizmodo also consulted University of California, Los Angeles law professor Eugene Volokh on the libel case, and in his view it is not the strongest. Volokh, the author of a law review article on the legal liability of artificial intelligence models, points out that Walters’ lawsuit does not show what actual damage was done to his reputation. To recover damages from OpenAI, Walters would likely have to show that the company acted with “knowledge of falsity or reckless disregard of the possibility of falsity”, which would probably be difficult to prove given that the output came from a large language model rather than a person. However, this is not the first time OpenAI’s ChatGPT has produced a “hallucination”, and it probably won’t be the last. Volokh noted that it is certainly possible someone will win a defamation case against the company in the future if they can prove they lost money or a job because of an AI “hallucination”.