
Latest news with #EllisGeorge

AI hallucinations in court documents are a growing problem, and data shows lawyers are responsible for many of the errors

Yahoo · 27-05-2025

  • Since May 1, judges have called out at least 23 examples of AI hallucinations in court records.
  • Legal researcher Damien Charlotin's data shows fake citations have grown more common since 2023.
  • Most cases are from the US, and increasingly, the mistakes are made by lawyers, not laypeople.

Judges are catching fake legal citations more frequently, and it's increasingly the fault of lawyers over-relying on AI, new data shows. Damien Charlotin, a legal data analyst and consultant, created a public database of 120 cases in which courts found that AI hallucinated quotes, created fake cases, or cited other apparent legal authorities that didn't exist. Other cases in which AI hallucinates might not draw a judge's attention, so that number is a floor, not a ceiling.

While most mistakes were made by people struggling to represent themselves in court, the data shows that lawyers, and other professionals working with them, like paralegals, are increasingly at fault. In 2023, seven of the 10 cases in which hallucinations were caught involved so-called pro se litigants, and three were the fault of lawyers; last month, legal professionals were found to be at fault in at least 13 of the 23 cases where AI errors were found. "Cases of lawyers or litigants that have mistakenly cited hallucinated cases has now become a rather common trope," Charlotin wrote on his website.

The database includes 10 rulings from 2023, 37 from 2024, and 73 from the first five months of 2025, most of them from the US. Other countries where judges have caught AI mistakes include the UK, South Africa, Israel, Australia, and Spain. Courts around the world have also grown comfortable punishing AI misuse with monetary fines, imposing sanctions of $10,000 or more in five cases, four of them this year.

In many cases, the offending individuals don't have the resources or know-how for sophisticated legal research, which often requires analyzing many cases citing the same laws to see how they have been interpreted in the past. One South African court said an "elderly" lawyer involved in the use of fake AI citations seemed "technologically challenged."

In recent months, however, attorneys working with top US law firms on high-profile cases have been caught using AI. Lawyers at the firms K&L Gates and Ellis George recently admitted that they relied partly on made-up cases because of a miscommunication among lawyers working on the case and a failure to check their work, resulting in a sanction of about $31,000.

In many of the cases in Charlotin's database, the specific AI website or software used wasn't mentioned. In some cases, judges concluded that AI had been used despite denials by the parties involved. Where a specific tool was named, however, ChatGPT appears in Charlotin's data more often than any other.

Charlotin didn't immediately respond to a request for comment.

Read the original article on Business Insider


Law Firms Caught and Punished for Passing Around "Bogus" AI Slop in Court

Yahoo · 15-05-2025

A California judge fined two law firms $31,000 after discovering that they'd included AI slop in a legal brief — the latest instance in a growing tide of avoidable legal drama wrought by lawyers using generative AI to do their work without any due diligence.

As The Verge reported this week, the court filing in question was a brief for a civil lawsuit against the insurance giant State Farm. After its submission, a review of the brief found that it contained "bogus AI-generated research" that led to the inclusion of "numerous false, inaccurate, and misleading legal citations and quotations," as Judge Michael Wilner wrote in a scathing ruling.

According to the ruling, it was only after the judge requested more information about the error-riddled brief that lawyers at the firms involved fessed up to using generative AI. And if he hadn't caught on to it, Wilner cautioned, the AI slop could have made its way into an official judicial order.

"I read their brief, was persuaded (or at least intrigued) by the authorities that they cited, and looked up the decisions to learn more about them — only to find that they didn't exist," Wilner wrote in his ruling. "That's scary."

"It almost led to the scarier outcome (from my perspective)," he added, "of including those bogus materials in a judicial order."

A lawyer at one of the firms involved with the ten-page brief, the Ellis George group, used Google's Gemini and a few other law-specific AI tools to draft an initial outline. That outline included many errors, but it was passed along to the next law firm, K&L Gates, without any corrections. Incredibly, the second firm also failed to notice and correct the fabrications.

"No attorney or staff member at either firm apparently cite-checked or otherwise reviewed that research before filing the brief," Wilner wrote in the ruling.

After the brief was submitted, a judicial review found that a staggering nine out of 27 legal citations included in the filing "were incorrect in some way," and "at least two of the authorities cited do not exist." Wilner also found that quotes "attributed to the cited judicial opinions were phony and did not accurately represent those materials."

As for his decision to levy the hefty fines, Wilner said the egregiousness of the failures, coupled with how compelling the AI's made-up responses were, necessitated "strong deterrence." "Strong deterrence is needed," wrote Wilner, "to make sure that lawyers don't respond to this easy shortcut."

More on lawyers and AI: Large Law Firm Sends Panicked Email as It Realizes Its Attorneys Have Been Using AI to Prepare Court Documents
