
Latest news with #CoCounsel

Inside the AI boom that's changing how Big Law attorneys work

Business Insider

18-07-2025

DLA Piper rolls out Microsoft Copilot firmwide

Assess: DLA Piper has defended Microsoft in a defamation suit over AI-generated content and helped OpenAI put forward its views to Congress on how AI should be regulated. It's leaning into the tech internally, too. Danny Tobey, chair of DLA Piper's AI and data analytics practice, said the firm has an internal group of lawyers and technologists who test tools and develop metrics for quality and accuracy. The team runs A/B tests on real cases, comparing results from traditional legal teams against AI-assisted ones to evaluate performance across speed, accuracy, and cost.

Apply: Microsoft has highlighted DLA Piper as the first major law firm to adopt Copilot firmwide, after starting with several hundred licenses in late 2023. Lawyers use Copilot within their existing Microsoft 365 apps, Tobey said. Think drafting documents, poring over spreadsheets, and creating PowerPoint slides. For more advanced legal research and analysis, he said, attorneys turn to legal-specific tools like Harvey, CoCounsel, and LexisNexis Protégé. DLA Piper has also developed custom language models to help clients spot compliance risks early, including under laws like the Foreign Corrupt Practices Act and the Anti-Kickback Statute. "We've found a number of issues before they metastasized into outright violations," Tobey said, "and that allowed the company to step in and do some education and compliance refreshing before there was a problem."

Align: Tobey said the firm provides detailed training for lawyers on how to use its tools. "We train on a per-tool basis because they all have strengths and weaknesses," Tobey said. "If you were a doctor, you would not adopt a new tool without being trained in its limitations."

Gibson Dunn pilots ChatGPT Enterprise with its lawyers and staff

Assess: Before adopting any tool, Gibson Dunn runs a three-step review process, said Meredith Williams-Range, the firm's chief legal operations officer. Tools must first pass an internal audit covering security, privacy, and risk. Next, they undergo proof-of-concept testing with a small group. Finally, tools must demonstrate real value to lawyers through hands-on use, a process that can take days or, as with a Harvey pilot, stretch over several months.

Apply: ChatGPT Enterprise is one tool making its way through Gibson Dunn's internal processes. In June, the firm launched a pilot with more than 500 participants, a mix of lawyers and staff, to put the product through its paces. Williams-Range said she emailed practice group leaders and managing partners around the world, asking them to submit lawyers willing to test the tool. Three days later, 450 people had signed up, more than twice what she expected. Gibson Dunn says it's also evaluating rival AI models, Google Gemini and Claude Enterprise. The firm works with a range of vendors, including Harvey, Thomson Reuters, and Microsoft. Some tools, like Harvey and CoCounsel, are used to support legal work, while Copilot helps with administrative tasks. For more specific use cases, the firm collaborates with developers to build custom workflows tailored to its practices and data, Williams-Range said.

Align: The firm's AI policy is reviewed quarterly to stay current with changing regulations, she said. It also includes a procurement playbook with specific terms around security and how the firm shares learnings about the tools. Gibson Dunn also has a strategic advisory board made up of over 30 partners across its offices globally. This brain trust meets monthly to guide policy decisions, debate use cases, and determine whether tools like ChatGPT Enterprise should be limited, expanded, or customized. "Just because we can doesn't mean we should," Williams-Range said, referring to the principle that guides the board's work.

Sidley Austin hones prompt engineering skills during associate orientation

Assess: Over her 29 years with the firm, corporate lawyer Sharon Flanagan has watched Sidley embrace new tech, but with guardrails in place. The firm formed an AI council with members from its management committee, executive committee, and strategy team to set policies and identify use cases. Sidley typically starts with small-scale rollouts to pilot new tools before expanding.

Apply: Sidley has explored a range of AI tools, says Jane Rheem, Sidley's chief data and AI officer, from legal-specific platforms to broader foundation models to point solutions that help with timekeeping or narrative writing. The firm declined to identify the AI tools it's testing, saying it doesn't want to endorse products that may not be part of its long-term strategy. Flanagan says uptake has been organic among litigators and corporate and regulatory attorneys.

Align: Implementation is only the beginning, Rheem says. The firm tracks usage after deployment, gathering data and feedback from "superusers," early adopters who experiment broadly and flag where tools are working (or not). Sidley is also focused on making sure its youngest lawyers are fluent in the tools. This year, nearly 300 incoming associates participated in a generative AI hackathon as part of their orientation.

Ropes & Gray uses AI tools like Harvey and Hebbia to squeeze more hours out of the day

Assess: When Ropes & Gray finds an AI service it likes, Ed Black and the IT and practice technology teams put on their investment banker hats. "We phone them up every few weeks and say, 'Tell us about your updates,'" said Black, the firm's technology strategy leader. Before a tool can move to testing, it must pass a security and risk audit; only "qualified vendors" make it to the next phase. From there, testing is twofold. First, a technical evaluation by the firm's technology team aims to ensure the product works as promised. Then a second round with lawyers examines usability and actual value in practice.

Apply: Ropes & Gray rolled out Harvey firmwide in June, after a year of use with a smaller test group, Black said. The firm has also collaborated with Harvey on a "workflow builder" that lets users design and deploy custom agents, software that can carry out tasks on its own. Hebbia, an AI agent company focused on professional services, has proven particularly useful to lawyers like Melissa Bender, a partner in the asset management group and cohead of the private funds practice. When institutional investors need fund documents reviewed, Bender uses Hebbia to extract key terms and speed up summaries. She estimates the process now takes two to three hours, down from what would typically be a 10-hour matter.

Align: Black stresses responsible use of the tools, starting with the principle that their results are first drafts, not the final product. The private funds practice requires tool-specific training for junior and mid-level associates, Bender says, while more senior lawyers are "strongly encouraged" to take the training. The goal is to ensure lawyers know how to use the tools appropriately and to empower them to speak with clients about the firm's technology capabilities. "We are in the business of selling legal services," Bender said. "I want our associates to understand the differentiated nature of our offering."

Morgan Lewis requires staff to get credentialed before they can use the tools

Assess: At Morgan Lewis, the first step in adopting AI isn't picking the tool. It's diagnosing the problem, said attorney Timothy Levin, who leads the firm's investment management practice. Understanding how legal work can be improved with AI ensures tools are applied where they can have a real impact, rather than just throwing tech at a problem, Levin said. Once a tool passes security and risk checks, it's piloted by an attorney and C-suite advisory group spanning 15 practice areas and firm operations, a cross-section designed to vet the tool's value across the firm's legal work.

Apply: Morgan Lewis has been inundated with startup pitches, says Colleen Nihill, its chief AI and knowledge officer, as the legal tech gold rush draws a wave of new founders. To cut through the noise, Morgan Lewis favors larger enterprise partners that align with its technical standards. For example, Thomson Reuters is a strategic partner. The firm's advisory group meets regularly with Thomson Reuters to review existing tools, preview the product road map, and beta test unreleased features. They also collaborate to co-develop tools tailored to Morgan Lewis's needs. One use case at Morgan Lewis involves reviewing fund documents for institutional investors, where CoCounsel Core helps attorneys summarize key terms and flag client-specific dealbreakers.

Align: Nihill said the firm requires its staff to get credentialed for tools before they can use them. Partners and firm leadership were the first to get CoCounsel Core-certified, a process that included Coursera-based coursework, hands-on exercises, and a final assessment. Once certified, users receive a digital badge displayed on their internal profiles. Nihill says this signals to associates that these tools aren't just approved; they're a professional priority for the firm.

Justice in the Age of Artificial Intelligence: Who Rules, Man or Machine? 1-2

Ammon

06-07-2025

A landmark ruling issued by the U.S. District Court for the Southern District of Florida in May 2025 exposed a real-world example of AI hallucination. The story began when a visiting attorney from California, alongside a local Florida lawyer, submitted a legal brief citing a judicial decision from the Delaware Court. The shock came when the court discovered that the cited ruling never existed: it had been fabricated by an AI tool used for legal research. This alarming episode highlights an existential challenge to the core principles of justice and accountability within the legal system: the necessity of fairness, integrity, and infallibility in justice.

Generative artificial intelligence has become a formidable, disruptive force within courtrooms, law offices, and legislative bodies. Today, AI actively participates in drafting contracts, generating complex legal arguments, editing judicial decisions, and even producing preliminary legislative texts. This is not simply a technical breakthrough; it is a paradigm shift that redefines the very structure of our legal systems. Recent studies reveal that AI-generated legal texts now span a vast array of critical documents, from commercial contracts and legal pleadings to judicial rulings, legislative proposals, legal complaints, and even preliminary police reports. These are produced using advanced tools like ChatGPT, Westlaw, and CoCounsel, creating a new legal reality where algorithmic power converges with human intention.

Law firms increasingly rely on AI to rapidly produce draft contracts, while courts use it to analyze case patterns and predict outcomes. Some legislatures have even begun accepting draft bills generated by AI tools, subject only to final human review. This dependency raises critical questions of responsibility, review, and consequences. Who is accountable for errors? Who verifies the content? Who bears the legal implications of a misstep?

Amid this enthusiasm for technological progress, deeper challenges emerge, ones that extend far beyond technical concerns to strike at the heart of ethical and philosophical questions about justice itself. Foremost among these challenges are the dangers of hallucination and bias. As the Florida case clearly demonstrated, AI tools, despite their computational power, can generate fictitious citations and entirely false legal precedents. This is not a minor technical glitch; it undermines the foundation of legal fairness and equality before the law. Bias embedded in training data may skew legal analysis, raising profound philosophical concerns about how justice can be achieved when the algorithm's foundation is inherently flawed.

A second looming threat is the phenomenon of legal floodgates. The ease with which AI can generate vast volumes of legal text may lead to an overwhelming influx of redundant or unnecessary documents. Courts may become buried under irrelevant data, straining judicial efficiency and potentially damaging public trust in the legal system. The justice process may become clogged with voluminous yet valueless content, diluting the importance of professional legal judgment and undermining procedural clarity.

A third and equally troubling issue is that of authenticity and authorship. Here arises a fundamental question that strikes at the moral fabric of the legal profession: Who truly authored a given legal text? Does a document reflect the considered intention of an attorney or the deliberation of a judge, or is it merely the product of an algorithm, devoid of human intent or ethical responsibility? This issue plunges us into the domain of moral philosophy and legal theory, where the 'original intent' behind a legal document is paramount. When human authorship is obscured, the chain of accountability becomes dangerously unclear, threatening the legal system's foundational principles.

Legal institutions across the globe vary in how they approach these transformations, exposing a troubling regulatory gap. Some courts, particularly in the United States, as illustrated by the Florida decision, now explicitly prohibit the submission of AI-generated legal briefs or rulings unless thoroughly reviewed by a human. Meanwhile, other jurisdictions require only clear disclosure of AI usage and mandate human review prior to official submission. This divergence reveals the lack of a unified regulatory framework to govern such technologies.

On the other side of the equation, tech companies have initiated voluntary self-regulation, embedding safeguards to limit AI output in sensitive legal contexts. While such efforts are commendable, they lack legal enforcement and are largely driven by internal ethics and market realities. This reveals the limitations of self-regulation and underscores the urgent need for external legislative intervention to foster long-term trust in legal institutions.

Justice today is no longer written solely by the pens of lawyers and the verdicts of judges; it is increasingly authored in lines of code and AI-generated prompts. This transformation is not merely technical; it is deeply philosophical, changing how we understand law, its origins, and the scope of accountability. The question is no longer 'Should we use AI?' but rather 'How do we use it in a way that ensures justice, protects truth, and preserves the irreplaceable role of human conscience in legal decision-making?' Law is not a static script; it is the living spirit of justice. It must not be distorted by algorithms nor misled by artificial minds. Professional integrity must remain indivisible and untouchable, no matter how advanced the tools we wield.

Justice at stake as generative AI enters the courtroom

Qatar Tribune

24-06-2025

Agencies

Generative artificial intelligence (GenAI) is making its way into courts despite early stumbles, raising questions about how it will influence the legal system and justice itself. Judges use the technology for research, lawyers utilize it for appeals and parties involved in cases have relied on GenAI to help express themselves in court.

'It's probably used more than people expect,' said Daniel Linna, a professor at the Northwestern Pritzker School of Law, about GenAI in the U.S. legal system. 'Judges don't necessarily raise their hand and talk about this to a whole room of judges, but I have people who come to me afterward and say they are experimenting with it.'

In one prominent instance, GenAI enabled murder victim Chris Pelkey to address an Arizona courtroom -- in the form of a video avatar -- at the sentencing of the man convicted of shooting him dead in 2021 during a clash between motorists. 'I believe in forgiveness,' said a digital proxy of Pelkey created by his sister, Stacey Wales. The judge voiced appreciation for the avatar, saying it seemed authentic. 'I knew it would be powerful,' Wales told AFP, 'that it would humanize Chris in the eyes of the judge.' The AI testimony, a first of its kind, ended the sentencing hearing at which Wales and other members of the slain man's family spoke about the impact of the loss.

Since the hearing, examples of GenAI being used in U.S. legal cases have multiplied. 'It is a helpful tool and it is time-saving, as long as the accuracy is confirmed,' said attorney Stephen Schwartz, who practices in the northeastern state of Maine. 'Overall, it's a positive development in jurisprudence.' Schwartz described using ChatGPT as well as GenAI legal assistants, such as LexisNexis Protege and CoCounsel from Thomson Reuters, for researching case law and other tasks. 'You can't completely rely on it,' Schwartz cautioned, recommending that cases proffered by GenAI be read to ensure accuracy. 'We are all aware of a horror story where AI comes up with mixed-up case things.'

The technology has been the culprit behind false legal citations, far-fetched case precedents, and flat-out fabrications. In early May, a federal judge in Los Angeles imposed $31,100 in fines and damages on two law firms for an error-riddled petition drafted with the help of GenAI, blasting it as a 'collective debacle.' The tech is also being relied on by some who skip lawyers and represent themselves in court, often causing legal errors. And as GenAI makes it easier and cheaper to draft legal complaints, courts already overburdened by caseloads could see them climb higher, said Shay Cleary of the National Center for State Courts. 'Courts need to be prepared to handle that,' Cleary said.

Law professor Linna sees the potential for GenAI to be part of the solution, though, giving more people the ability to seek justice in courts made more efficient. 'We have a huge number of people who don't have access to legal services,' Linna said. 'These tools can be transformative; of course we need to be thoughtful about how we integrate them.'

Justice at stake as generative AI enters the courtroom

Japan Today

23-06-2025

Generative artificial intelligence has been used in the US legal system by judges performing research, lawyers filing appeals, and parties involved in cases who wanted help expressing themselves in court.

By Thomas URBAIN

Generative artificial intelligence (GenAI) is making its way into courts despite early stumbles, raising questions about how it will influence the legal system and justice itself. Judges use the technology for research, lawyers utilize it for appeals and parties involved in cases have relied on GenAI to help express themselves in court.

"It's probably used more than people expect," said Daniel Linna, a professor at the Northwestern Pritzker School of Law, about GenAI in the U.S. legal system. "Judges don't necessarily raise their hand and talk about this to a whole room of judges, but I have people who come to me afterward and say they are experimenting with it."

In one prominent instance, GenAI enabled murder victim Chris Pelkey to address an Arizona courtroom -- in the form of a video avatar -- at the sentencing of the man convicted of shooting him dead in 2021 during a clash between motorists. "I believe in forgiveness," said a digital proxy of Pelkey created by his sister, Stacey Wales. The judge voiced appreciation for the avatar, saying it seemed authentic. "I knew it would be powerful," Wales told AFP, "that it would humanize Chris in the eyes of the judge." The AI testimony, a first of its kind, ended the sentencing hearing at which Wales and other members of the slain man's family spoke about the impact of the loss.

Since the hearing, examples of GenAI being used in U.S. legal cases have multiplied. "It is a helpful tool and it is time-saving, as long as the accuracy is confirmed," said attorney Stephen Schwartz, who practices in the northeastern state of Maine. "Overall, it's a positive development in jurisprudence." Schwartz described using ChatGPT as well as GenAI legal assistants, such as LexisNexis Protege and CoCounsel from Thomson Reuters, for researching case law and other tasks. "You can't completely rely on it," Schwartz cautioned, recommending that cases proffered by GenAI be read to ensure accuracy. "We are all aware of a horror story where AI comes up with mixed-up case things."

The technology has been the culprit behind false legal citations, far-fetched case precedents, and flat-out fabrications. In early May, a federal judge in Los Angeles imposed $31,100 in fines and damages on two law firms for an error-riddled petition drafted with the help of GenAI, blasting it as a "collective debacle." The tech is also being relied on by some who skip lawyers and represent themselves in court, often causing legal errors. And as GenAI makes it easier and cheaper to draft legal complaints, courts already overburdened by caseloads could see them climb higher, said Shay Cleary of the National Center for State Courts. "Courts need to be prepared to handle that," Cleary said.

Law professor Linna sees the potential for GenAI to be part of the solution, though, giving more people the ability to seek justice in courts made more efficient. "We have a huge number of people who don't have access to legal services," Linna said. "These tools can be transformative; of course we need to be thoughtful about how we integrate them." Federal judges in the U.S. capital have written decisions noting their use of ChatGPT in laying out their opinions. "Judges need to be technologically up-to-date and trained in AI," Linna said.

GenAI assistants already have the potential to influence the outcome of cases the same way a human law clerk might, reasoned the professor. Facts or case law pointed out by GenAI might sway a judge's decision, and could be different from what a legal clerk would have come up with. But if GenAI lives up to its potential and excels at finding the best information for judges to consider, that could make for well-grounded rulings less likely to be overturned on appeal, according to Linna.

© 2025 AFP

Justice at stake as generative AI enters the courtroom

Time of India

19-06-2025

Generative artificial intelligence (GenAI) is making its way into courts despite early stumbles, raising questions about how it will influence the legal system and justice itself. Judges use the technology for research, lawyers utilize it for appeals and parties involved in cases have relied on GenAI to help express themselves in court.

"It's probably used more than people expect," said Daniel Linna, a professor at the Northwestern Pritzker School of Law, about GenAI in the US legal system. "Judges don't necessarily raise their hand and talk about this to a whole room of judges, but I have people who come to me afterward and say they are experimenting with it."

In one prominent instance, GenAI enabled murder victim Chris Pelkey to address an Arizona courtroom -- in the form of a video avatar -- at the sentencing of the man convicted of shooting him dead in 2021 during a clash between motorists. "I believe in forgiveness," said a digital proxy of Pelkey created by his sister, Stacey Wales. The judge voiced appreciation for the avatar, saying it seemed authentic. "I knew it would be powerful," Wales told AFP, "that it would humanize Chris in the eyes of the judge." The AI testimony, a first of its kind, ended the sentencing hearing at which Wales and other members of the slain man's family spoke about the impact of the loss.

Since the hearing, examples of GenAI being used in US legal cases have multiplied. "It is a helpful tool and it is time-saving, as long as the accuracy is confirmed," said attorney Stephen Schwartz, who practices in the northeastern state of Maine. "Overall, it's a positive development in jurisprudence." Schwartz described using ChatGPT as well as GenAI legal assistants, such as LexisNexis Protege and CoCounsel from Thomson Reuters, for researching case law and other tasks. "You can't completely rely on it," Schwartz cautioned, recommending that cases proffered by GenAI be read to ensure accuracy. "We are all aware of a horror story where AI comes up with mixed-up case things."

The technology has been the culprit behind false legal citations, far-fetched case precedents, and flat-out fabrications. In early May, a federal judge in Los Angeles imposed $31,100 in fines and damages on two law firms for an error-riddled petition drafted with the help of GenAI, blasting it as a "collective debacle." The tech is also being relied on by some who skip lawyers and represent themselves in court, often causing legal errors. And as GenAI makes it easier and cheaper to draft legal complaints, courts already overburdened by caseloads could see them climb higher, said Shay Cleary of the National Center for State Courts. "Courts need to be prepared to handle that," Cleary said.

Transformation

Law professor Linna sees the potential for GenAI to be part of the solution, though, giving more people the ability to seek justice in courts made more efficient. "We have a huge number of people who don't have access to legal services," Linna said. "These tools can be transformative; of course we need to be thoughtful about how we integrate them." Federal judges in the US capital have written decisions noting their use of ChatGPT in laying out their opinions. "Judges need to be technologically up-to-date and trained in AI," Linna said. GenAI assistants already have the potential to influence the outcome of cases the same way a human law clerk might, reasoned the professor. Facts or case law pointed out by GenAI might sway a judge's decision, and could be different from what a legal clerk would have come up with. But if GenAI lives up to its potential and excels at finding the best information for judges to consider, that could make for well-grounded rulings less likely to be overturned on appeal, according to Linna.
