KEY POINTS

  • OpenAI said 'process supervision' trains AI models to produce reasoning 'endorsed' by humans
  • Some experts believe the approach won't do anything significant to mitigate AI hallucinations
  • A lawyer faces possible sanctions for using non-existent court decisions produced by ChatGPT

ChatGPT maker OpenAI has announced a new approach it says will help prevent "hallucinations" in artificial intelligence, following an incident in which the AI leader's large language model (LLM) produced fabricated cases for a New York lawyer.

Under the new approach, a model is rewarded for each correct step of its reasoning rather than only for the final answer, researchers at OpenAI said in a paper published Wednesday. The company calls the new approach process supervision, as opposed to its previous approach, outcome supervision.

"In addition to boosting performance relative to outcome supervision, process supervision also has an important alignment benefit: it directly trains the model to produce a chain-of-thought that is endorsed by humans," the researchers said.

OpenAI researchers noted that LLMs have "greatly improved" at multi-step reasoning over the last few years, but they still produce "logical mistakes," often called "hallucinations."

Michelle Cheng, an emerging tech journalist at Quartz, described AI hallucination as an instance in which an AI model provides "inaccurate information or fake stuff."

Bernard Marr, a strategist who advises governments and companies, described hallucination in the tech as the generation of outputs that may sound correct but are "either factually incorrect or unrelated to the given context."

Marr gave an example in which a user asked ChatGPT when Leonardo da Vinci painted the Mona Lisa, and the chatbot answered that the painting was created in 1815. According to the Getty Center, the Mona Lisa was actually painted between 1503 and 1506.

While OpenAI researchers believe mitigating these logical mistakes is a crucial step toward making models more capable of solving reasoning problems, some experts are skeptical that process supervision will do enough to stop a model from hallucinating.

"I just don't think that this alone does any significant mitigation of concerns about misinformation and incorrect results ... when it's actually being used in the wild," Ben Winters, leader of the AI and human rights project at the Electronic Privacy Information Center, told CNBC.

Suresh Venkatasubramanian, director of the Center for Technology Responsibility at Brown University, told the outlet that he views the paper as more of a preliminary observation, noting that it was unclear whether it had been peer-reviewed.

Karl Cobbe, a mathgen researcher at OpenAI, clarified to CNBC that the company did not invent the process supervision approach, but said it is helping to push the approach forward in the AI space.

The paper's release came days after reports emerged of a New York lawyer facing possible sanctions after he drafted a legal brief using non-existent court cases provided by ChatGPT.

Attorney Steven Schwartz of Levidow, Levidow & Oberman is due to appear in court for a sanctions hearing on June 8 after he admitted to using ChatGPT for a brief in a client's personal injury case against Avianca Airlines, Reuters reported.

Schwartz said in a filing that he "never" used the chatbot prior to the case in question and that he did not intend to "deceive" Avianca or the court. He added that in his over 30 years of practice, he has never been cited for any legal misconduct before the incident.

The New York lawyer said he "greatly regrets" using generative AI and will never do so again without "absolute verification" of its authenticity.

Earlier this week, Judge Brantley Starr of Texas added a new requirement for any attorney appearing in his court regarding the use of ChatGPT and other generative AI tools.

If a filing was drafted with the help of AI, the attorney must make sure it was checked "by a human being" for accuracy, Starr said in the order, according to legal scholar Eugene Volokh.

It is unclear whether Judge Starr issued the order in response to Schwartz's case coming to light.

In mid-March, OpenAI CEO Sam Altman said in an interview with ABC News that he believes AI is the "greatest technology humanity has yet developed," but that the company is "a little bit scared of this."

Last month, Altman appeared before lawmakers on Capitol Hill to discuss AI. He said OpenAI wanted to be "clear" about the "downside" of AI and that he was willing to cooperate with the government to mitigate the technology's risks.

The company's leadership also recently proposed that the government establish a regulatory body similar to the International Atomic Energy Agency (IAEA) to keep the technology in check.

Illustration: OpenAI and ChatGPT logos. ChatGPT was recently thrust into the spotlight after a lawyer admitted to using the chatbot to draft a brief in which the generative AI model "hallucinated" and produced non-existent court decisions. (Reuters)