OpenAI Advocates Novel Strategy to Combat ‘AI Hallucinations’

KEY POINTS

- ChatGPT maker OpenAI has announced a new approach called “process supervision” to prevent “hallucinations” in artificial intelligence (AI) models
- The announcement follows an incident in which OpenAI’s large language model produced fabricated court cases for a New York lawyer
- Process supervision rewards a model for each step of its reasoning where it reaches a correct answer, rather than only for the final answer
- OpenAI researchers believe that mitigating a model’s logical mistakes would make it more capable of solving reasoning problems
- Some experts are skeptical that process supervision will do enough to stop an AI model from hallucinating

OpenAI researchers noted that large language models (LLMs) have “greatly improved” in performing multi-step reasoning over the last few years, but they still produce “logical mistakes,” often called “hallucinations.” Under process supervision, the model is rewarded for each correct step in its chain of reasoning instead of receiving a single reward for the final answer.
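To make the two training signals concrete, here is a minimal sketch contrasting them on a toy arithmetic problem. Everything in it is a hypothetical illustration rather than OpenAI’s implementation: the function names, the string format of the steps, and the arithmetic checker are all assumptions made for the example.

```python
# Toy contrast between outcome supervision and process supervision.
# Hypothetical illustration only -- not OpenAI's actual training code.

def outcome_reward(final_answer, correct_answer):
    """Outcome supervision: one reward based solely on the final answer."""
    return 1.0 if final_answer == correct_answer else 0.0

def process_reward(steps, step_is_correct):
    """Process supervision: credit for each reasoning step judged correct."""
    return sum(1.0 for step in steps if step_is_correct(step)) / len(steps)

def check_arithmetic_step(step):
    """Verify a toy step of the form '<expression> = <claimed value>'."""
    expression, claimed = step.split(" = ")
    return eval(expression) == int(claimed)

# A two-step chain of thought for 12 * 4 + 5; the second step is wrong.
steps = ["12 * 4 = 48", "48 + 5 = 55"]  # 48 + 5 is actually 53

print(outcome_reward(final_answer=55, correct_answer=53))  # 0.0 -- flags only the end result
print(process_reward(steps, check_arithmetic_step))        # 0.5 -- isolates the faulty step
```

The two outputs (0.0 versus 0.5) show the difference in feedback granularity: the outcome reward only signals that something went wrong somewhere, while the per-step reward pinpoints that the first step was sound and the second was not, which is the kind of error localization the researchers argue should help with reasoning problems.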

Michelle Cheng, an emerging-tech journalist at Quartz, described AI hallucination as an instance in which an AI model provides “inaccurate information or fake stuff.”

Bernard Marr, a strategist who advises governments and companies, described hallucination in this context as the generation of outputs that may sound correct but are “either factually incorrect or unrelated to the given context.”

The research paper’s release came days after reports emerged that a New York lawyer faces possible sanctions for drafting a legal brief using non-existent court cases provided by ChatGPT.

Attorney Steven Schwartz of Levidow, Levidow & Oberman is due to appear in court for a sanctions hearing on June 8 after he admitted to using ChatGPT for a brief in a client’s personal injury case against Avianca Airlines, Reuters reported.

Earlier this week, Judge Brantley Starr of Texas added a new requirement for any attorney appearing in his court: no reliance on ChatGPT or any other generative AI without human verification.

If a filing was drafted by AI, the attorney must make sure it was checked “by a human being” for accuracy, Starr said in the order, according to legal scholar Eugene Volokh.

In mid-March, OpenAI CEO Sam Altman said in an interview with ABC News that he believes AI is the “greatest technology humanity has yet developed,” but that the company is “a little bit scared of this.”

Last month, Altman appeared before lawmakers on Capitol Hill to discuss AI. He said OpenAI wanted to be “clear” about the “downside” of AI and that he was willing to cooperate with the government to mitigate the technology’s risks.

The company’s leadership also recently proposed that the government establish a regulatory body similar to the International Atomic Energy Agency (IAEA) to keep the technology in check.

Published June 6, 2023 | Source: www.ibtimes.com
