AI 'hallucinations' are a growing problem for the legal profession

You've probably heard the one about the product that blows up in its creators' faces when they're trying to demonstrate how great it is.
Here's a ripped-from-the-headlines yarn about what happened when a big law firm used an AI bot product developed by Anthropic, its client, to help write an expert's testimony defending the client.
It didn't go well. Anthropic's chatbot, Claude, got the title and authors of one paper cited in the expert's statement wrong, and injected wording errors elsewhere. The errors were incorporated in the statement when it was filed in court in April.
Those errors were enough to prompt the plaintiffs suing Anthropic — music publishers who allege that the AI firm is infringing their copyrights by feeding lyrics into Claude to 'train' the bot — to ask the federal magistrate overseeing the case to throw out the expert's testimony in its entirety.
It may also become a black eye for the big law firm Latham & Watkins, which represents Anthropic and submitted the errant declaration.
Latham argues that the errors were inconsequential, amounting to an 'honest citation mistake and not a fabrication.' The firm's failure to notice the errors before the statement was filed is 'an embarrassing and unintentional mistake,' but it shouldn't be exploited to invalidate the expert's opinion, the firm told Magistrate Judge Susan van Keulen of San Jose, who is managing the pretrial phase of the lawsuit. The plaintiffs, however, say the errors 'fatally undermine the reliability' of the expert's declaration.
At a May 13 hearing conducted by phone, van Keulen herself expressed doubts.
'There is a world of difference between a missed citation and a hallucination generated by AI, and everyone on this call knows that,' she said, according to a transcript of the hearing cited by the plaintiffs. (Van Keulen hasn't yet ruled on whether to keep the expert's declaration in the record or whether to hit the law firm with sanctions.)
That's the issue confronting judges as courthouse filings peppered with serious errors and even outright fabrications — what AI experts term 'hallucinations' — continue to be submitted in lawsuits.
A roster compiled by the French lawyer and data expert Damien Charlotin now numbers 99 cases from federal courts in two dozen states as well as from courts in Europe, Israel, Australia, Canada and South Africa.
That's almost certainly an undercount, Charlotin says. The number of cases in which AI-generated errors have gone undetected is incalculable, he says: 'I can only cover cases where people got caught.'
In nearly half the cases, the guilty parties are pro se litigants — that is, people pursuing a case without a lawyer. Those litigants generally have been treated leniently by judges who recognize their inexperience; they are seldom fined, though their cases may be dismissed.
In most of the cases, however, the responsible parties were lawyers. Amazingly, in some 30 cases involving lawyers, the AI-generated errors were discovered, or appeared in documents filed, as recently as this year, long after the tendency of AI bots to 'hallucinate' became evident. That suggests the problem is getting worse, not better.
'I can't believe people haven't yet cottoned to the thought that AI-generated material is full of errors and fabrications, and therefore every citation in a filing needs to be confirmed,' says UCLA law professor Eugene Volokh.
Judges have been making it clear that they have had it up to here with fabricated quotes, incorrect references to legal decisions and citations to nonexistent precedents generated by AI bots. Submitting a brief or other document without certifying the truth of its factual assertions, including citations to other cases or court decisions, is a violation of Rule 11 of the Federal Rules of Civil Procedure, which renders lawyers vulnerable to monetary sanctions or disciplinary actions.
Some courts have issued standing orders that the use of AI at any point in the preparation of a filing must be disclosed, along with a certification that every reference in the document has been verified. At least one federal judicial district has forbidden almost any use of AI.
The proliferation of faulty references in court filings also points to the most serious problem with the spread of AI bots into our daily lives: They can't be trusted. Long ago it became evident that when even the most sophisticated AI systems are flummoxed by a question or task, they fill in the blanks in their own knowledge by making things up.
As other fields use AI bots to perform important tasks, the consequences can be dire. Many medical patients 'can be led astray by hallucinations,' a team of Stanford researchers wrote last year. Even the most advanced bots, they found, couldn't back up their medical assertions with solid sources 30% of the time.
It's fair to say that workers in almost any occupation can fall victim to weariness or inattention, but attorneys often handle disputes with thousands or millions of dollars at stake, and they're expected to be especially rigorous about fact-checking formal submissions.
Some legal experts say there's a legitimate role for AI in the law — even to make decisions customarily left to judges. But lawyers can hardly be unaware of the pitfalls for their own profession in failing to monitor bots' outputs.
The very first sanctions case on Charlotin's list originated in June 2023 — Mata vs. Avianca, a New York personal injury case that resulted in a $5,000 penalty for two lawyers who prepared and submitted a legal brief that was largely the product of the ChatGPT chatbot. The brief cited at least nine court decisions that were soon exposed as nonexistent. The case was widely publicized coast to coast.
One would think fiascos like this would cure lawyers of their reliance on artificial intelligence chatbots to do their work for them. One would be wrong. Charlotin believes that the superficially authentic tone of AI bots' output may encourage overworked or inattentive lawyers to accept bogus citations without double-checking.
'AI is very good at looking good,' he told me. Legal citations follow a standardized format, so 'they're easy to mimic in fake citations,' he says.
It may also be true that the sanctions in the earliest cases, which generally amounted to no more than a few thousand dollars, were insufficient to capture the bar's attention. But Volokh believes the financial consequences of filing bogus citations should pale next to the nonmonetary consequences.
'The main sanctions to each lawyer are the humiliation in front of the judge, in front of the client, in front of supervisors or partners..., possibly in front of opposing counsel, and, if the case hits the news, in front of prospective future clients, other lawyers, etc.,' he told me. 'Bad for business and bad for the ego.'
Charlotin's dataset makes for amusing reading — if mortifying for the lawyers involved. It's peopled by lawyers who appear to be totally oblivious to the technological world they live in.
The lawyer who prepared the hallucinatory ChatGPT filing in the Avianca case, Steven A. Schwartz, later testified that he was 'operating under the false perception that this website could not possibly be fabricating cases on its own.' When he began to suspect that the cases couldn't be found in legal databases because they were fake, he sought reassurance — from ChatGPT!
'Is Varghese a real case?' he texted the bot. Yes, it's 'a real case,' the bot replied. Schwartz didn't respond to my request for comment.
Other cases underscore the perils of placing one's trust in AI.
For example, last year Keith Ellison, the attorney general of Minnesota, hired Jeff Hancock, a communications professor at Stanford, to provide an expert opinion on the danger of AI-faked material in politics. Ellison was defending a state law that made the distribution of such material in political campaigns a crime; the law was challenged in a lawsuit as an infringement of free speech.
Hancock, a well-respected expert in the social harms of AI-generated deepfakes — photos, videos and recordings that seem to be the real thing but are convincingly fabricated — submitted a declaration that Ellison duly filed in court.
But Hancock's declaration included three hallucinated references apparently generated by ChatGPT, the AI bot he had consulted while writing it. One attributed to bogus authors an article he himself had written, but he didn't catch the mistake until it was pointed out by the plaintiffs.
Laura M. Provinzino, the federal judge in the case, was struck by what she called 'the irony' of the episode: 'Professor Hancock, a credentialed expert on the dangers of AI and misinformation, has fallen victim to the siren call of relying too heavily on AI — in a case that revolves around the dangers of AI, no less.'
That provoked her to anger. Hancock's use of fake citations, she wrote, 'shatters his credibility with this Court.' Noting that he had attested to the veracity of his declaration under penalty of perjury, she threw out his entire expert declaration and refused to allow Ellison to file a corrected version.
In a mea culpa statement to the court, Hancock explained that the errors might have crept into his declaration when he cut-and-pasted a note to himself. But he maintained that the points he made in his declaration were valid nevertheless. He didn't respond to my request for further comment.
On Feb. 6, Michael R. Wilner, a former federal magistrate serving as a special master in a California federal case against State Farm Insurance, hit the two law firms representing the plaintiff with $31,000 in sanctions for submitting a brief with 'numerous false, inaccurate, and misleading legal citations and quotations.'
In that case, a lawyer had prepared an outline of the brief for the associates assigned to write it. He had used an AI bot to help write the outline, but didn't warn the associates of the bot's role. Consequently, they treated the citations in the outline as genuine and didn't bother to double-check them.
As it happened, Wilner noted, 'approximately nine of the 27 legal citations in the ten-page brief were incorrect in some way.' He chose not to sanction the individual lawyers: 'This was a collective debacle,' he wrote.
Wilner added that when he read the brief, the citations almost persuaded him that the plaintiff's case was sound — until he looked up the cases and discovered they were bogus. 'That's scary,' he wrote. His monetary sanction for misusing AI appears to be the largest in a U.S. court ... so far.

