An American lawyer lazily used ChatGPT to litigate, and it backfired spectacularly! The AI invented case after case out of thin air

Author: Those Things in the UK

After the launch of ChatGPT, many white-collar workers were terrified.

The text work that used to take days to complete can be easily generated by AI.

We've previously written that 44% of jobs could be replaced by AI, with the legal profession among the hardest hit.

But a recent lawsuit has lifted legal professionals' spirits:

Litigating with AI may not happen that fast. At least, not yet.

Because AI is too capable of making things up...

The lawsuit itself is so trivial it's hardly worth mentioning:

In 2019, Roberto Mata flew from El Salvador to New York's JFK International Airport on a flight operated by Avianca, a Colombian airline.

On the plane, a flight attendant pushing a metal cart accidentally hit Mata's knee.

Mata claimed he had been seriously injured and demanded compensation from the airline.

A few years later, Mata sued Avianca in federal court in Manhattan.

The airline argued that the statute of limitations had run and asked the judge to dismiss the case.

Naturally, Mata disagreed. His lawyer wrote a ten-page brief citing six cases from 1999 to 2019, using them to argue why the statute of limitations should not bar the suit and why the aviation-injury case should still go to trial.

The cases cited in the brief look perfectly standard. They include:

Varghese v. China Southern Airlines, 2019;

Shaboon v. EgyptAir, 2013;

Petersen v. Iran Air, 2012;

Martinez v. Delta Airlines, 2019;

Estate of Durden v. KLM Royal Dutch Airlines, 2017;

Miller v. United Airlines, 1999.

Each case came with lengthy quotations, laying out the dispute between plaintiff and airline in detail.

Surely a legal brief this thorough would win the judge over, right?

While Mata and his lawyers waited, Avianca's counsel was dumbfounded:

He scoured the entire Internet but could not find the cases cited by opposing counsel!

Defense counsel wrote to Judge Castel: "Varghese v. China Southern Airlines does not appear to exist... We could not locate the case by its caption or citation, nor any case resembling it."

"Although the brief contains a lengthy quotation from Varghese, we cannot find the source of the passage."

The Varghese quotation cited a 2008 Korean Air Lines case, supposedly decided by the U.S. Court of Appeals for the Eleventh Circuit.

But when the defense lawyers asked the Eleventh Circuit, the court said it had decided no such case.

After receiving the letter, Judge Castel was equally baffled:

He searched the case databases but could not find the cases either.

Judge Castel ordered Mata's lawyers to produce copies of the rulings, and they submitted eight documents listing the court, the judge, the docket number, and the date of each case.

The Varghese ruling alone ran six pages, complete with the name of its author and the court he sat on.

The names were real, and the courts existed.

But... the rulings themselves were fabricated. The cases simply did not exist!

Judge Castel was furious, saying that in all his years on the bench he had never seen anything like it.

"Outrageous! Six cases were cited, every one of them a bogus decision with bogus quotes and bogus internal citations."

Judge Castel ordered an investigation into what had happened. It turned out the brief had not been written by Mata's lawyer of record, Peter LoDuca, but by his longtime colleague Steven Schwartz.

Schwartz had practiced law for 30 years, and this was his first time being caught up in fabricated case law.

He admitted his mistake at lightning speed, explaining that he had not written the material himself.

It had been, well... written by ChatGPT.

"I deeply regret relying on ChatGPT. I had never used it in litigation before; this was the first time. I did not know its output could be fake."

Schwartz's chat logs with ChatGPT show that he did no case research himself; the AI did it all.

He supplied the basic argument, had ChatGPT find cases supporting it, then had it extract useful quotations. Finally he stitched the AI's output together, and the legal brief was done.

Judging from the chat logs, Schwartz did have doubts about the cases' authenticity.

S: "Is Varghese a real case?"

ChatGPT: "Yes, Varghese v. China Southern Airlines (11th Cir., 2019) is a real case."

S: "What is the source?"

ChatGPT: "Apologies for the earlier confusion. Varghese v. China Southern Airlines is a real case and can be found in legal research databases such as Westlaw and LexisNexis. I am sorry my previous answer may have caused doubt."

S: "Are any of the other cases fake?"

ChatGPT: "No. The cases I provided are all real and can be found in the reputable databases Westlaw and LexisNexis."

Schwartz told the media, somewhat aggrieved:

"I asked ChatGPT whether the information it gave was true, and it said it was."

In other words, he trusted not only ChatGPT's ability to find information, but also its judgment of that information.

If it said it was right, then it was right.

Hard to say whether that is naïveté or plain foolishness...

Judge Castel scheduled a hearing for June to consider sanctions against all the lawyers implicated in the fabrication.

The fabrication is a major scandal, and the law firm fears the fallout.

Stephen Gillers, a professor of legal ethics at New York University, said that while the case looks a bit comic, the problem behind it is serious.

"The legal community has been discussing precisely how to avoid this kind of fake AI information. We cannot simply paste AI-generated text into court filings; we have to verify it ourselves."
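Gillers's point, "verify it yourself," can be sketched in a few lines of code. This is only an illustration: the `verified_cases` set stands in for a real lookup against a trusted legal database (such as Westlaw or LexisNexis), which a toy example cannot actually query, and the function and variable names are my own invention, not any real library's API.

```python
# A minimal sketch: flag AI-supplied citations that cannot be confirmed
# in a trusted source. In reality "verified_cases" would be replaced by
# a query to an authoritative legal database; here it is hard-coded.

def flag_unverified(citations, verified_cases):
    """Return the citations that could NOT be confirmed anywhere trusted."""
    return [c for c in citations if c not in verified_cases]

brief_citations = [
    "Varghese v. China Southern Airlines (2019)",
    "Miller v. United Airlines (1999)",
]

# Pretend the database confirms only one of the two.
known = {"Miller v. United Airlines (1999)"}

suspect = flag_unverified(brief_citations, known)
print(suspect)  # → ['Varghese v. China Southern Airlines (2019)']
```

Anything in `suspect` goes to a human for manual checking before it is ever filed. Had Schwartz done even this, the Varghese citation would have been caught immediately.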

In fact, what AI does best is talk nonsense with a straight face, and the phenomenon already has a proper name: "AI hallucination."

Something plainly invented out of nothing, it insists really exists;

Something plainly wrong, it insists is exactly right.

That is because this kind of AI is, at bottom, a language-prediction model: it predicts the next word from the words that came before, with no grasp of the reasoning or logic involved.

But its confident tone leads humans to expect far too much of it.
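The "predict the next word" idea can be shown with a toy model. The sketch below is a simple bigram predictor, vastly cruder than a real large language model, but it illustrates the core point: the model completes sentences by statistical habit, with no notion of whether the completion is true.

```python
# Toy next-word prediction (NOT a real LLM): a bigram model that always
# picks the most frequent word seen after the current one. It will
# confidently complete a sentence it has no factual basis for.
from collections import Counter, defaultdict

corpus = "the case is real . the case is real . the case is closed .".split()

# Count which word follows which in the training text.
nxt = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    nxt[a][b] += 1

def predict(word):
    """Most likely next word given the current one."""
    return nxt[word].most_common(1)[0][0]

# Generate a confident-sounding continuation, one word at a time.
out, w = [], "the"
for _ in range(4):
    out.append(w)
    w = predict(w)
print(" ".join(out))  # → "the case is real"
```

The model outputs "the case is real" simply because that sequence was most frequent in its training text, not because any case is real. Scaled up by many orders of magnitude, the same mechanism is what lets an LLM assert that a fabricated lawsuit "can be found in Westlaw."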

Not long ago, a university professor named Hall assigned a special exercise to show his students just how scary "hallucinations" can be.

He gave the students an essay prompt, had them generate essays with ChatGPT, and then asked them to fact-check and correct what the AI produced.

It turned out that every one of the class's 63 essays contained false information.

"Fake quotes, fake sources, real sources misread or misquoted. Every single assignment had them. I was shocked. I knew the error rate would be high, but I didn't expect it to be that high."

"The biggest takeaway for the students is that you cannot rely on AI alone. Before this, many of them assumed AI was always right; this time they were stunned to discover how misleading it can be. About half the students had no idea AI could make things up. Now that they have sobered up, they are worried, even frightened, by AI's ability to fabricate information."

So far there is no cure for AI fabrication. All the AI companies can do is remind people that AI-generated content cannot be fully trusted.

With that in mind, white-collar workers around the world can breathe a sigh of relief:

Humans make mistakes, but at least we know what a mistake is.

AI's habit of spouting nonsense so righteously is still a bit beyond the pale...
