Hiltzik: AI 'hallucinations' are a growing problem for the legal profession
You've probably heard the one about the product that blows up in its creators' faces when they're trying to demonstrate how great it is.
Here's a ripped-from-the-headlines yarn about what happened when a big law firm used an AI bot product developed by Anthropic, its client, to help write an expert's testimony defending the client.
It didn't go well. Anthropic's chatbot, Claude, got the title and authors of one paper cited in the expert's statement wrong, and injected wording errors elsewhere. The errors were incorporated in the statement when it was filed in court in April.
Those errors were enough to prompt the plaintiffs suing Anthropic — music publishers who allege that the AI firm is infringing their copyrights by feeding lyrics into Claude to "train" the bot — to ask the federal magistrate overseeing the case to throw out the expert's testimony in its entirety.
It may also become a black eye for the big law firm Latham & Watkins, which represents Anthropic and submitted the errant declaration.
Latham argues that the errors were inconsequential, amounting to an "honest citation mistake and not a fabrication." The firm's failure to notice the errors before the statement was filed is "an embarrassing and unintentional mistake," but it shouldn't be exploited to invalidate the expert's opinion, the firm told Magistrate Judge Susan van Keulen of San Jose, who is managing the pretrial phase of the lawsuit. The plaintiffs, however, say the errors "fatally undermine the reliability" of the expert's declaration.
At a May 13 hearing conducted by phone, Van Keulen herself expressed doubts.
"There is a world of difference between a missed citation and a hallucination generated by AI, and everyone on this call knows that," she said, according to a transcript of the hearing cited by the plaintiffs. (Van Keulen hasn't yet ruled on whether to keep the expert's declaration in the record or whether to hit the law firm with sanctions.)
That's the issue confronting judges as courthouse filings peppered with serious errors and even outright fabrications — what AI experts term "hallucinations" — continue to be submitted in lawsuits.
A roster compiled by the French lawyer and data expert Damien Charlotin now numbers 99 cases from federal courts in two dozen states as well as from courts in Europe, Israel, Australia, Canada and South Africa.
That's almost certainly an undercount, Charlotin says. The number of cases in which AI-generated errors have gone undetected is incalculable, he says: "I can only cover cases where people got caught."
In nearly half the cases, the guilty parties are pro se litigants — that is, people pursuing a case without a lawyer. Those litigants generally have been treated leniently by judges who recognize their inexperience; they seldom are fined, though their cases may be dismissed.
In most of the cases, however, the responsible parties were lawyers. Amazingly, in some 30 cases involving lawyers, the AI-generated errors were discovered, or appeared in documents filed, as recently as this year, long after the tendency of AI bots to "hallucinate" became evident. That suggests the problem is getting worse, not better.
"I can't believe people haven't yet cottoned to the thought that AI-generated material is full of errors and fabrications, and therefore every citation in a filing needs to be confirmed," says UCLA law professor Eugene Volokh.
Judges have been making it clear that they have had it up to here with fabricated quotes, incorrect references to legal decisions and citations to nonexistent precedents generated by AI bots. Submitting a brief or other document without certifying the truth of its factual assertions, including citations to other cases or court decisions, is a violation of Rule 11 of the Federal Rules of Civil Procedure, which renders lawyers vulnerable to monetary sanctions or disciplinary actions.
Some courts have issued standing orders that the use of AI at any point in the preparation of a filing must be disclosed, along with a certification that every reference in the document has been verified. At least one federal judicial district has forbidden almost any use of AI.
The proliferation of faulty references in court filings also points to the most serious problem with the spread of AI bots into our daily lives: They can't be trusted. Long ago it became evident that when even the most sophisticated AI systems are flummoxed by a question or task, they fill in the blanks in their own knowledge by making things up.
As other fields use AI bots to perform important tasks, the consequences can be dire. Many medical patients "can be led astray by hallucinations," a team of Stanford researchers wrote last year. Even the most advanced bots, they found, couldn't back up their medical assertions with solid sources 30% of the time.
It's fair to say that workers in almost any occupation can fall victim to weariness or inattention; but attorneys often deal with disputes with thousands or millions of dollars at stake, and they're expected to be especially rigorous about fact-checking formal submissions.
Some legal experts say there's a legitimate role for AI in the law — even to make decisions customarily left to judges. But lawyers can hardly be unaware of the pitfalls for their own profession in failing to monitor bots' outputs.
The very first sanctions case on Charlotin's list originated in June 2023 — Mata vs. Avianca, a New York personal injury case that resulted in a $5,000 penalty for two lawyers who prepared and submitted a legal brief that was largely the product of the ChatGPT chatbot. The brief cited at least nine court decisions that were soon exposed as nonexistent. The case was widely publicized coast to coast.
One would think fiascos like this would cure lawyers of their reliance on artificial intelligence chatbots to do their work for them. One would be wrong. Charlotin believes that the superficially authentic tone of AI bots' output may encourage overworked or inattentive lawyers to accept bogus citations without double-checking.
"AI is very good at looking good," he told me. Legal citations follow a standardized format, so "they're easy to mimic in fake citations," he says.
It may also be true that the sanctions in the earliest cases, which generally amounted to no more than a few thousand dollars, were insufficient to capture the bar's attention. But Volokh believes the financial consequences of filing bogus citations should pale next to the nonmonetary consequences.
"The main sanctions to each lawyer are the humiliation in front of the judge, in front of the client, in front of supervisors or partners..., possibly in front of opposing counsel, and, if the case hits the news, in front of prospective future clients, other lawyers, etc.," he told me. "Bad for business and bad for the ego."
Charlotin's dataset makes for amusing reading — if mortifying for the lawyers involved. It's peopled by lawyers who appear to be totally oblivious to the technological world they live in.
The lawyer who prepared the hallucinatory ChatGPT filing in the Avianca case, Steven A. Schwartz, later testified that he was "operating under the false perception that this website could not possibly be fabricating cases on its own." When he began to suspect that the cases couldn't be found in legal databases because they were fake, he sought reassurance — from ChatGPT!
"Is Varghese a real case?" he texted the bot. Yes, it's "a real case," the bot replied. Schwartz didn't respond to my request for comment.
Other cases underscore the perils of placing one's trust in AI.
For example, last year Keith Ellison, the attorney general of Minnesota, hired Jeff Hancock, a communications professor at Stanford, to provide an expert opinion on the danger of AI-faked material in politics. Ellison was defending a state law that made the distribution of such material in political campaigns a crime; the law was challenged in a lawsuit as an infringement of free speech.
Hancock, a well-respected expert in the social harms of AI-generated deepfakes — photos, videos and recordings that seem to be the real thing but are convincingly fabricated — submitted a declaration that Ellison duly filed in court.
But Hancock's declaration included three hallucinated references apparently generated by ChatGPT, the AI bot he had consulted while writing it. One attributed an article he himself had written to bogus authors, and he didn't catch the mistake until the plaintiffs pointed it out.
Laura M. Provinzino, the federal judge in the case, was struck by what she called "the irony" of the episode: "Professor Hancock, a credentialed expert on the dangers of AI and misinformation, has fallen victim to the siren call of relying too heavily on AI — in a case that revolves around the dangers of AI, no less."
That provoked her to anger. Hancock's use of fake citations, she wrote, "shatters his credibility with this Court." Noting that he had attested to the veracity of his declaration under penalty of perjury, she threw out his entire expert declaration and refused to allow Ellison to file a corrected version.
In a mea culpa statement to the court, Hancock explained that the errors might have crept into his declaration when he cut-and-pasted a note to himself. But he maintained that the points he made in his declaration were valid nevertheless. He didn't respond to my request for further comment.
On Feb. 6, Michael R. Wilner, a former federal magistrate serving as a special master in a California federal case against State Farm Insurance, hit the two law firms representing the plaintiff with $31,000 in sanctions for submitting a brief with "numerous false, inaccurate, and misleading legal citations and quotations."
In that case, a lawyer had prepared an outline of the brief for the associates assigned to write it. He had used an AI bot to help write the outline, but didn't warn the associates of the bot's role. Consequently, they treated the citations in the outline as genuine and didn't bother to double-check them.
As it happened, Wilner noted, "approximately nine of the 27 legal citations in the ten-page brief were incorrect in some way." He chose not to sanction the individual lawyers: "This was a collective debacle," he wrote.
Wilner added that when he read the brief, the citations almost persuaded him that the plaintiff's case was sound — until he looked up the cases and discovered they were bogus. "That's scary," he wrote. His monetary sanction for misusing AI appears to be the largest in a U.S. court ... so far.
This story originally appeared in the Los Angeles Times.
