Latest news with #AndrewPerlman

Why do lawyers keep using ChatGPT?

The Verge

4 days ago

Every few weeks, it seems like there's a new headline about a lawyer getting in trouble for submitting filings containing, in the words of one judge, 'bogus AI-generated research.' The details vary, but the throughline is the same: an attorney turns to a large language model (LLM) like ChatGPT to help them with legal research (or worse, writing), the LLM hallucinates cases that don't exist, and the lawyer is none the wiser until the judge or opposing counsel points out their mistake. In some cases, including an aviation lawsuit from 2023, attorneys have had to pay fines for submitting filings with AI-generated hallucinations. So why haven't they stopped?

The answer mostly comes down to time crunches, and the way AI has crept into nearly every profession. Legal research databases like LexisNexis and Westlaw have AI integrations now. For lawyers juggling big caseloads, AI can seem like an incredibly efficient assistant. Most lawyers aren't necessarily using ChatGPT to write their filings, but they are increasingly using it and other LLMs for research. Yet many of these lawyers, like much of the public, don't understand exactly what LLMs are or how they work. One attorney who was sanctioned in 2023 said he thought ChatGPT was a 'super search engine.' It took submitting a filing with fake citations to reveal that it's more like a random-phrase generator — one that could give you either correct information or convincingly phrased nonsense.

Andrew Perlman, the dean of Suffolk University Law School, argues that many lawyers are using AI tools without incident, and that the ones who get caught with fake citations are outliers. 'I think that what we're seeing now — although these problems of hallucination are real, and lawyers have to take it very seriously and be careful about it — doesn't mean that these tools don't have enormous possible benefits and use cases for the delivery of legal services,' Perlman said.

In fact, 63 percent of lawyers surveyed by Thomson Reuters in 2024 said they've used AI in the past, and 12 percent said they use it regularly. Respondents said they use AI to write summaries of case law and to research 'case law, statutes, forms or sample language for orders.' The attorneys surveyed see it as a time-saving tool, and half of them said 'exploring the potential for implementing AI' at work is their highest priority. 'The role of a good lawyer is as a 'trusted advisor' not as a producer of documents,' one respondent said.

But as plenty of recent examples have shown, the documents produced by AI aren't always accurate, and in some cases aren't real at all. In one recent high-profile case, lawyers for journalist Tim Burke, who was arrested for publishing unaired Fox News footage in 2024, submitted a motion to dismiss the case against him on First Amendment grounds. After discovering that the filing included 'significant misrepresentations and misquotations of supposedly pertinent case law and history,' Judge Kathryn Kimball Mizelle, of Florida's Middle District, ordered the motion stricken from the case record. Mizelle found nine hallucinations in the document, according to the Tampa Bay Times.

Mizelle ultimately let Burke's lawyers, Mark Rasch and Michael Maddux, submit a new motion. In a separate filing explaining the mistakes, Rasch wrote that he 'assumes sole and exclusive responsibility for these errors.'

Rasch said he used the 'deep research' feature on ChatGPT Pro, which The Verge has previously tested with mixed results, as well as Westlaw's AI feature.

Rasch isn't alone. Lawyers representing Anthropic recently admitted to using the company's Claude AI to help write an expert witness declaration submitted as part of the copyright infringement lawsuit brought against Anthropic by music publishers. That filing included a citation with an 'inaccurate title and inaccurate authors.' Last December, misinformation expert Jeff Hancock admitted he used ChatGPT to help organize citations in a declaration he submitted in support of a Minnesota law regulating deepfake use. Hancock's filing included 'two citation errors, popularly referred to as 'hallucinations,'' and incorrectly listed authors for another citation.

These documents do, in fact, matter — at least in the eyes of judges. In one recent case, a California judge presiding over a case against State Farm was initially swayed by arguments in a brief, only to find that the case law cited was completely made up. 'I read their brief, was persuaded (or at least intrigued) by the authorities that they cited, and looked up the decisions to learn more about them – only to find that they didn't exist,' Judge Michael Wilner wrote.

Perlman said there are several less risky ways lawyers use generative AI in their work, including finding information in large tranches of discovery documents, reviewing briefs or filings, and brainstorming possible arguments or possible opposing views. 'I think in almost every task, there are ways in which generative AI can be useful — not a substitute for lawyers' judgment, not a substitute for the expertise that lawyers bring to the table, but in order to supplement what lawyers do and enable them to do their work better, faster, and cheaper,' Perlman said.

But like anyone using AI tools, lawyers who rely on them to help with legal research and writing need to be careful to check the work they produce, Perlman said. Part of the problem is that attorneys often find themselves short on time — an issue he says existed before LLMs came into the picture. 'Even before the emergence of generative AI, lawyers would file documents with citations that didn't really address the issue that they claimed to be addressing,' Perlman said. 'It was just a different kind of problem. Sometimes when lawyers are rushed, they insert citations, they don't properly check them; they don't really see if the case has been overturned or overruled.' (That said, the cases do at least typically exist.)

Another, more insidious problem is that attorneys — like others who use LLMs to help with research and writing — are too trusting of what AI produces. 'I think many people are lulled into a sense of comfort with the output, because it appears at first glance to be so well crafted,' Perlman said.

Alexander Kolodin, an election lawyer and Republican state representative in Arizona, said he treats ChatGPT as a junior-level associate. He's also used ChatGPT to help write legislation. In 2024, he included AI-generated text in part of a bill on deepfakes, having the LLM provide the 'baseline definition' of what deepfakes are, and then 'I, the human, added in the protections for human rights, things like that it excludes comedy, satire, criticism, artistic expression, that kind of stuff,' Kolodin told The Guardian at the time.

Kolodin said he 'may have' discussed his use of ChatGPT with the bill's main Democratic cosponsor but otherwise wanted it to be 'an Easter egg' in the bill. The bill passed into law.

Kolodin — who was sanctioned by the Arizona State Bar in 2020 for his involvement in lawsuits challenging the result of the 2020 election — has also used ChatGPT to write first drafts of amendments, and told The Verge he uses it for legal research as well. To avoid the hallucination problem, he said, he just checks the citations to make sure they're real. 'You don't just typically send out a junior associate's work product without checking the citations,' said Kolodin. 'It's not just machines that hallucinate; a junior associate could read the case wrong, it doesn't really stand for the proposition cited anyway, whatever. You still have to cite-check it, but you have to do that with an associate anyway, unless they were pretty experienced.'

Kolodin said he uses both ChatGPT Pro's 'deep research' tool and the LexisNexis AI tool. Like Westlaw, LexisNexis is a legal research tool primarily used by attorneys. Kolodin said that in his experience, LexisNexis has a higher hallucination rate than ChatGPT, whose rate he says has 'gone down substantially over the past year.'

AI use among lawyers has become so prevalent that in 2024, the American Bar Association issued its first guidance on attorneys' use of LLMs and other AI tools. Lawyers who use AI tools 'have a duty of competence, including maintaining relevant technological competence, which requires an understanding of the evolving nature' of generative AI, the opinion reads. The guidance advises lawyers to 'acquire a general understanding of the benefits and risks of the GAI tools' they use — or, in other words, not to assume that an LLM is a 'super search engine.' Attorneys should also weigh the confidentiality risks of inputting information relating to their cases into LLMs, and consider whether to tell their clients about their use of LLMs and other AI tools, it states.

Perlman is bullish on lawyers' use of AI. 'I do think that generative AI is going to be the most impactful technology the legal profession has ever seen and that lawyers will be expected to use these tools in the future,' he said. 'I think that at some point, we will stop worrying about the competence of lawyers who use these tools and start worrying about the competence of lawyers who don't.'

Others, including one of the judges who sanctioned lawyers for submitting a filing full of AI-generated hallucinations, are more skeptical. 'Even with recent advances,' Wilner wrote, 'no reasonably competent attorney should out-source research and writing to this technology — particularly without any attempt to verify the accuracy of that material.'
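The cite-check routine Kolodin describes lends itself to partial automation: before a filing goes out, each draft citation can be run against a public case-law index, and anything the index cannot find gets flagged for manual review. Below is a minimal, hypothetical Python sketch of that idea. It assumes CourtListener's public search API; the endpoint, the 'case_name' parameter, the 'o' (opinions) type code, and the 'count' response field are all assumptions that should be confirmed against the current API documentation. The example list pairs the real 2023 sanctions case, Mata v. Avianca, with one of the citations ChatGPT fabricated in that same aviation lawsuit.

```python
# Hypothetical cite-check helper (an illustrative sketch, not a vetted tool).
# Assumes CourtListener's public search API; confirm the endpoint, parameters,
# and response shape at https://www.courtlistener.com/help/api/ before use.
import requests

SEARCH_URL = "https://www.courtlistener.com/api/rest/v4/search/"

def case_name_found(case_name: str) -> bool:
    """True if the index reports at least one opinion with a matching case name."""
    resp = requests.get(
        SEARCH_URL,
        # "case_name" filter and type "o" (opinions) are assumptions here.
        params={"case_name": case_name, "type": "o"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json().get("count", 0) > 0

# One real case and one citation ChatGPT fabricated in the 2023 aviation suit.
draft_citations = [
    "Mata v. Avianca",
    "Varghese v. China Southern Airlines",
]

for cite in draft_citations:
    verdict = "found" if case_name_found(cite) else "NOT FOUND: verify by hand"
    print(f"{cite}: {verdict}")
```

Even a hit only shows that the case exists. As Kolodin's junior-associate analogy suggests, the opinion still has to be read, because a real citation can fail to stand for the proposition cited or may have been overturned.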

Large Law Firm Sends Panicked Email as It Realizes Its Attorneys Have Been Using AI to Prepare Court Documents

Yahoo

21-02-2025

The law firm Morgan & Morgan has rushed out a stern email to its attorneys after two of them were caught citing fake court cases invented by an AI model, Reuters reports. Sent earlier this month to all of its more than 1,000 lawyers, the email warns at length about the tech's proclivity for hallucinating. But the pros of the tech apparently still outweigh the cons; rather than banning AI usage — something plenty of organizations have done — Morgan & Morgan's leadership takes the middle road and gives the usual spiel about double-checking your work to ensure it's not totally made-up nonsense.

"As we previously instructed you, if you use AI to identify cases for citation, every case must be independently verified," the email reads. "The integrity of your legal work and reputation depend on it."

Last week, a federal judge in Wyoming admonished two Morgan & Morgan lawyers for citing at least nine instances of fake case law in court filings submitted in January. Threatened with sanctions, the embarrassed lawyers blamed an "internal AI tool" for the mishap and pleaded with the judge for mercy.

"When lawyers are caught using ChatGPT or any generative AI tool to create citations without checking them, that's incompetence, just pure and simple," Andrew Perlman, dean of Suffolk University's law school and an advocate of using AI in legal work, told Reuters.

The judge hasn't decided whether he'll punish the lawyers yet, per Reuters. Nonetheless, it's an enormous embarrassment for the relatively well-known firm, especially given the occasion. The lawsuit in question is against the world's largest company, Walmart, alleging that a hoverboard the retailer sold was responsible for a fire that burned down the plaintiff's home. Now, the corporate lawyers are probably cackling to themselves in a backroom somewhere, their opponents having shot themselves in the foot so spectacularly.

Anyone familiar with the shortcomings inherent to large language models could've seen something like this coming from a mile away. And according to Reuters, the tech's dubious usage in legal settings has already led to lawyers being questioned or disciplined in at least seven cases in the past two years. It's not just the hallucinations that are so pernicious — it's how authoritatively the AI models lie to you. That, and the fact that anything promising to automate a task tends, more often than not, to lull the person using it into letting their guard down, a problem that's become pretty apparent in self-driving cars, for example, or in news agencies that have experimented with using AI to summarize stories or to assist with reporting.

And so organizations can tell their employees to double-check their work all they want, but the fact remains that screw-ups like these will keep happening. To address the issue, Morgan & Morgan is requiring attorneys to acknowledge that they're aware of the risks associated with AI usage by clicking a little box added to its AI tool, The Register reported. We're sure that'll do the trick.

AI 'hallucinations' in court papers spell trouble for lawyers

Yahoo

18-02-2025

By Sara Merken (Reuters) - U.S. personal injury law firm Morgan & Morgan sent an urgent email this month to its more than 1,000 lawyers: Artificial intelligence can invent fake case law, and using made-up information in a court filing could get you fired.

A federal judge in Wyoming had just threatened to sanction two lawyers at the firm who included fictitious case citations in a lawsuit against Walmart. One of the lawyers admitted in court filings last week that he used an AI program that "hallucinated" the cases and apologized for what he called an inadvertent mistake.

AI's penchant for generating legal fiction in case filings has led courts around the country to question or discipline lawyers in at least seven cases over the last two years, and created a new high-tech headache for litigants and judges, Reuters found.

The Walmart case stands out because it involves a well-known law firm and a big corporate defendant. But examples like it have cropped up in all kinds of lawsuits since chatbots like ChatGPT ushered in the AI era, highlighting a new litigation risk. A Morgan & Morgan spokesperson did not respond to a request for comment. Walmart declined to comment. The judge has not yet ruled whether to discipline the lawyers in the Walmart case, which involved an allegedly defective hoverboard toy.

Advances in generative AI are helping reduce the time lawyers need to research and draft legal briefs, leading many law firms to contract with AI vendors or build their own AI tools. Sixty-three percent of lawyers surveyed by Reuters' parent company Thomson Reuters last year said they have used AI for work, and 12% said they use it regularly.

Generative AI, however, is known to confidently make up facts, and lawyers who use it must take caution, legal experts said. AI sometimes produces false information, known as "hallucinations" in the industry, because the models generate responses based on statistical patterns learned from large datasets rather than by verifying facts in those datasets.

Attorney ethics rules require lawyers to vet and stand by their court filings or risk being disciplined. The American Bar Association told its 400,000 members last year that those obligations extend to "even an unintentional misstatement" produced through AI.

The consequences have not changed just because legal research tools have evolved, said Andrew Perlman, dean of Suffolk University's law school and an advocate of using AI to enhance legal work. "When lawyers are caught using ChatGPT or any generative AI tool to create citations without checking them, that's incompetence, just pure and simple," Perlman said.

'LACK OF AI LITERACY'

In one of the earliest court rebukes over attorneys' use of AI, a federal judge in Manhattan in June 2023 fined two New York lawyers $5,000 for citing cases that were invented by AI in a personal injury case against an airline.

A different New York federal judge last year considered imposing sanctions in a case involving Michael Cohen, the former lawyer and fixer for Donald Trump, who said he mistakenly gave his own attorney fake case citations that the attorney submitted in Cohen's criminal tax and campaign finance case. Cohen, who used Google's AI chatbot Bard, and his lawyer were not sanctioned, but the judge called the episode "embarrassing."

In November, a Texas federal judge ordered a lawyer who cited nonexistent cases and quotations in a wrongful termination lawsuit to pay a $2,000 penalty and attend a course about generative AI in the legal field. A federal judge in Minnesota last month said a misinformation expert had destroyed his credibility with the court after he admitted to unintentionally citing fake, AI-generated citations in a case involving a "deepfake" parody of Vice President Kamala Harris.

Harry Surden, a law professor at the University of Colorado's law school who studies AI and the law, said he recommends lawyers spend time learning "the strengths and weaknesses of the tools." He said the mounting examples show a "lack of AI literacy" in the profession, but the technology itself is not the problem. "Lawyers have always made mistakes in their filings before AI," he said. "This is not new."
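The "statistical patterns" explanation above is the crux of the hallucination problem: a generative model produces each next token by sampling from a learned probability distribution, and nothing in that loop consults a source of truth such as a case-law database. Here is a toy, purely illustrative Python sketch of why fluent fabrications fall naturally out of that process; the distribution and case names are invented for the example.

```python
# Toy illustration of sampling-based text generation. The distribution below
# is invented for illustration; a real model learns one over its whole
# vocabulary at every step. Nothing here verifies output against real cases.
import random

# Hypothetical next-token probabilities after the prompt
# "The court held in Smith v."
next_token_probs = {
    "Jones": 0.40,        # fluent, and may happen to match a real case
    "Avianca": 0.25,      # fluent, echoes patterns seen in training data
    "Nonexistent": 0.20,  # just as fluent, entirely fabricated
    "Walmart": 0.15,
}

tokens = list(next_token_probs)
weights = list(next_token_probs.values())
sampled = random.choices(tokens, weights=weights, k=1)[0]
print(f"The court held in Smith v. {sampled} ...")
# The loop optimizes fluency, not truth: a fabricated case name is sampled
# from the same distribution as a real one, so it reads just as confidently.
```

That structural gap is what the ABA's technological-competence guidance and the independent cite-checking described above are meant to compensate for.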
