Michael Hiltzik: AI 'hallucinations' are a growing problem for the legal profession
You've probably heard the one about the product that blows up in its creators' faces when they're trying to demonstrate how great it is.
Here's a ripped-from-the-headlines yarn about what happened when a big law firm used an AI bot product developed by Anthropic, its client, to help write an expert's testimony defending the client.
It didn't go well. Anthropic's chatbot, Claude, got the title and authors of one paper cited in the expert's statement wrong, and injected wording errors elsewhere. The errors were incorporated in the statement when it was filed in court in April.
Those errors were enough to prompt the plaintiffs suing Anthropic - music publishers who allege that the AI firm is infringing their copyrights by feeding lyrics into Claude to "train" the bot - to ask the federal magistrate overseeing the case to throw out the expert's testimony in its entirety.
It may also become a black eye for the big law firm Latham & Watkins, which represents Anthropic and submitted the errant declaration.
Latham argues that the errors were inconsequential, amounting to an "honest citation mistake and not a fabrication." The firm's failure to notice the errors before the statement was filed is "an embarrassing and unintentional mistake," but it shouldn't be exploited to invalidate the expert's opinion, the firm told Magistrate Judge Susan van Keulen of San Jose, who is managing the pretrial phase of the lawsuit. The plaintiffs, however, say the errors "fatally undermine the reliability" of the expert's declaration.
At a May 13 hearing conducted by phone, Van Keulen herself expressed doubts.
"There is a world of difference between a missed citation and a hallucination generated by AI, and everyone on this call knows that," she said, according to a transcript of the hearing cited by the plaintiffs. (Van Keulen hasn't yet ruled on whether to keep the expert's declaration in the record or whether to hit the law firm with sanctions.)
That's the issue confronting judges as courthouse filings peppered with serious errors and even outright fabrications - what AI experts term "hallucinations" - continue to be submitted in lawsuits.
A roster compiled by the French lawyer and data expert Damien Charlotin now numbers 99 cases from federal courts in two dozen states as well as from courts in Europe, Israel, Australia, Canada and South Africa.
That's almost certainly an undercount, Charlotin says. The number of cases in which AI-generated errors have gone undetected is incalculable, he says: "I can only cover cases where people got caught."
In nearly half the cases, the guilty parties are pro se litigants - that is, people pursuing a case without a lawyer. Those litigants generally have been treated leniently by judges who recognize their inexperience; they seldom are fined, though their cases may be dismissed.
In most of the cases, however, the responsible parties were lawyers. Amazingly, in some 30 cases involving lawyers, the AI-generated errors were discovered, or appeared in documents filed, as recently as this year - long after the tendency of AI bots to "hallucinate" became evident. That suggests the problem is getting worse, not better.
"I can't believe people haven't yet cottoned to the thought that AI-generated material is full of errors and fabrications, and therefore every citation in a filing needs to be confirmed," says UCLA law professor Eugene Volokh.
Judges have been making it clear that they have had it up to here with fabricated quotes, incorrect references to legal decisions and citations to nonexistent precedents generated by AI bots. Under Rule 11 of the Federal Rules of Civil Procedure, a lawyer who submits a brief or other document thereby certifies the truth of its factual assertions, including citations to other cases or court decisions; filing fabrications violates that rule and leaves lawyers vulnerable to monetary sanctions or disciplinary actions.
Some courts have issued standing orders that the use of AI at any point in the preparation of a filing must be disclosed, along with a certification that every reference in the document has been verified. At least one federal judicial district has forbidden almost any use of AI.
The proliferation of faulty references in court filings also points to the most serious problem with the spread of AI bots into our daily lives: They can't be trusted. Long ago it became evident that when even the most sophisticated AI systems are flummoxed by a question or task, they fill in the blanks in their own knowledge by making things up.
As other fields enlist AI bots to perform important tasks, the consequences of such fabrications can be dire. Many medical patients "can be led astray by hallucinations," a team of Stanford researchers wrote last year. Even the most advanced bots, they found, couldn't back up their medical assertions with solid sources 30% of the time.
It's fair to say that workers in almost any occupation can fall victim to weariness or inattention; but attorneys often deal with disputes with thousands or millions of dollars at stake, and they're expected to be especially rigorous about fact-checking formal submissions.
Some legal experts say there's a legitimate role for AI in the law - even to make decisions customarily left to judges. But lawyers can hardly be unaware of the pitfalls for their own profession in failing to monitor bots' outputs.
The very first sanctions case on Charlotin's list originated in June 2023 - Mata vs. Avianca, a New York personal injury case that resulted in a $5,000 penalty for two lawyers who prepared and submitted a legal brief that was largely the product of the ChatGPT chatbot. The brief cited at least nine court decisions that were soon exposed as nonexistent. The case was widely publicized coast to coast.
One would think fiascos like this would cure lawyers of their reliance on artificial intelligence chatbots to do their work for them. One would be wrong. Charlotin believes that the superficially authentic tone of AI bots' output may encourage overworked or inattentive lawyers to accept bogus citations without double-checking.
"AI is very good at looking good," he told me. Legal citations follow a standardized format, so "they're easy to mimic in fake citations," he says.
It may also be true that the sanctions in the earliest cases, which generally amounted to no more than a few thousand dollars, were insufficient to capture the bar's attention. But Volokh believes the financial consequences of filing bogus citations should pale next to the nonmonetary consequences.
"The main sanctions to each lawyer are the humiliation in front of the judge, in front of the client, in front of supervisors or partners..., possibly in front of opposing counsel, and, if the case hits the news, in front of prospective future clients, other lawyers, etc.," he told me. "Bad for business and bad for the ego."
Charlotin's dataset makes for amusing reading - if mortifying for the lawyers involved. It's peopled by lawyers who appear to be totally oblivious to the technological world they live in.
The lawyer who prepared the hallucinatory ChatGPT filing in the Avianca case, Steven A. Schwartz, later testified that he was "operating under the false perception that this website could not possibly be fabricating cases on its own." When he began to suspect that the cases couldn't be found in legal databases because they were fake, he sought reassurance - from ChatGPT!
"Is Varghese a real case?" he texted the bot. Yes, it's "a real case," the bot replied. Schwartz didn't respond to my request for comment.
Other cases underscore the perils of placing one's trust in AI.
For example, last year Keith Ellison, the attorney general of Minnesota, hired Jeff Hancock, a communications professor at Stanford, to provide an expert opinion on the danger of AI-faked material in politics. Ellison was defending a state law that made the distribution of such material in political campaigns a crime; the law was challenged in a lawsuit as an infringement of free speech.
Hancock, a well-respected expert in the social harms of AI-generated deepfakes - photos, videos and recordings that seem to be the real thing but are convincingly fabricated - submitted a declaration that Ellison duly filed in court.
But Hancock's declaration included three hallucinated references apparently generated by ChatGPT, the AI bot he had consulted while writing it. One attributed an article he himself had written to bogus authors, and he didn't catch the mistake until the plaintiffs pointed it out.
Laura M. Provinzino, the federal judge in the case, was struck by what she called "the irony" of the episode: "Professor Hancock, a credentialed expert on the dangers of AI and misinformation, has fallen victim to the siren call of relying too heavily on AI - in a case that revolves around the dangers of AI, no less."
That provoked her to anger. Hancock's reliance on fake citations, she wrote, "shatters his credibility with this Court." Noting that he had attested to the veracity of his declaration under penalty of perjury, she threw out his entire expert declaration and refused to allow Ellison to file a corrected version.
In a mea culpa statement to the court, Hancock explained that the errors might have crept into his declaration when he cut-and-pasted a note to himself. But he maintained that the points he made in his declaration were valid nevertheless. He didn't respond to my request for further comment.
On Feb. 6, Michael R. Wilner, a former federal magistrate serving as a special master in a California federal case against State Farm Insurance, hit the two law firms representing the plaintiff with $31,000 in sanctions for submitting a brief with "numerous false, inaccurate, and misleading legal citations and quotations."
In that case, a lawyer had prepared an outline of the brief for the associates assigned to write it. He had used an AI bot to help write the outline, but didn't warn the associates of the bot's role. Consequently, they treated the citations in the outline as genuine and didn't bother to double-check them.
As it happened, Wilner noted, "approximately nine of the 27 legal citations in the ten-page brief were incorrect in some way." He chose not to sanction the individual lawyers: "This was a collective debacle," he wrote.
Wilner added that when he read the brief, the citations almost persuaded him that the plaintiff's case was sound - until he looked up the cases and discovered they were bogus. "That's scary," he wrote. His monetary sanction for misusing AI appears to be the largest in a U.S. court ... so far.
Copyright (C) 2025, Tribune Content Agency, LLC. Portions copyrighted by the respective providers.
