Lawyers say ChatGPT duped them into citing fake case law

Two remorseful lawyers, responding to an angry judge in Manhattan federal court, blamed ChatGPT on Thursday for tricking them into including fictitious legal research in a court filing.

Attorneys Steven A. Schwartz and Peter LoDuca face possible punishment over a filing in a lawsuit against an airline that cited past court decisions Schwartz believed were genuine but that had actually been invented by the chatbot.

Schwartz used the breakthrough technology to search for legal precedents supporting a client’s complaint against the airline Avianca over an injury suffered on a 2019 flight.

The chatbot, which has fascinated the world with its essay-like answers to user prompts, suggested several cases involving aviation mishaps that Schwartz had not been able to find through the usual methods used at his law firm.

Several of the cases were fake or involved nonexistent airlines.

Schwartz told U.S. District Judge P. Kevin Castel he “operated under a misconception… that this website was obtaining these cases from some source I did not have access to.”

He said he “failed miserably” at checking the citations.

Schwartz said he did not realize ChatGPT could fabricate cases.

Microsoft has invested some $1 billion in OpenAI, the company behind ChatGPT.

Its success, demonstrating how artificial intelligence could change the way people work and learn, has generated fears among some. In May, hundreds of industry leaders signed a letter warning that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

Judge Castel seemed both baffled and dismayed that the lawyers did not act quickly to correct the phony legal citations when Avianca’s lawyers and the court first alerted them to the problem. Avianca flagged the fake case law in a March filing.

The judge confronted Schwartz with one legal case invented by the chatbot. It was initially described as a wrongful-death case brought by a woman against an airline, only to morph into a claim about a man who missed a flight to New York and was forced to incur additional expenses.

“Isn’t that legal gibberish?” Castel asked.

Schwartz said he had wrongly assumed that the confusing presentation resulted from excerpts being drawn from different parts of the case.

When he finished his questioning, Castel asked Schwartz whether he had anything else to say.

Schwartz apologized profusely.

He said the mistake had hurt him personally and professionally and left him feeling “embarrassed, humiliated and extremely remorseful.”

He said he and his firm, Levidow, Levidow & Oberman, had put safeguards in place to prevent a repeat.

LoDuca, the other lawyer on the case, said he trusted Schwartz and did not adequately review his work.

“It never dawned on me that this was a bogus case,” LoDuca said after the judge read aloud portions of one cited case to demonstrate its “gibberish.”

He called the result “painful.”

Ronald Minkoff, an attorney for the law firm, told the judge that the submission “resulted from carelessness, not bad faith” and should not result in sanctions.

He said attorneys struggle with technology, especially new technology, “and it’s not getting easier.”

Schwartz, who rarely performs federal research, chose to use this new technology, Minkoff said. “He thought it was a standard search engine,” he added. “He was playing with live ammo.”

Daniel Shin, an adjunct professor and assistant director of research at William & Mary Law School’s Center for Legal and Court Technology, said he introduced the Avianca case at a conference last week that attracted dozens of participants in person and online from state and federal courts in the U.S., including Manhattan federal court.

The subject drew shock and bafflement at the conference, he said.

“We’re talking about the Southern District of New York, the federal district that handles 9/11 to all the big financial crimes,” Shin added. He said it was the first documented instance of potential attorney misconduct involving the use of generative AI.

He said the case demonstrated how ChatGPT can hallucinate, describing fictional things in a manner that sounds realistic but is not.

Shin warned against using promising AI technologies without understanding the risks.

The judge said he will rule on sanctions at a later date.
