
Lawyers Say AI Is Reshaping How—and Why—They Work Beyond Just Automation
When lawyers discuss artificial intelligence (AI), the conversation no longer centers solely on efficiency gains. A recent study commissioned by Ironclad, covering 800 attorneys and legal operations professionals, shows AI is changing legal work in deeper ways, shaping not only how lawyers operate but also why they remain in the profession despite the long hours, isolated workdays and taxing pursuit of perfection.
Lawyers often wear perfectionism like a badge of honor. But precision can come at a cost. "Lawyers by their very nature are overachievers... powered by, and steered towards, perfectionism. It's the perfect recipe for burnout," Jasmine Singh, general counsel at Ironclad, told Newsweek, recounting her six years spent working in Big Law.
Long hours of solitary and repetitive review, interspersed with a few high-drama moments like depositions or hearings, left Singh feeling disconnected. Over time, she recognized her fatigue for what it was: burnout. "I wasn't able to really step into my purpose for being a lawyer," Singh recalled.
She left for a time, teaching spin classes, fueling her passion for community and wellness. When she returned to practicing law a few months later, she decided to refocus her career on transactional in-house work, where she could work right alongside her team, day in and day out, helping build and protect companies.
Now back in the legal world, Singh found an unexpected ally: AI.
"In the last few years, I have extensively used AI," she said. "AI has become a coworker that helps when I need a brainstorming partner; an intern that helps do the rote or repeat work; a sounding board when I want to try a bunch of ideas and see what sticks." It hasn't just helped her do more with less, it's helped her breathe easier, with a safety net for those workday burdens.
And she's not alone. Ironclad's second annual State of AI in Legal Report revealed that 96 percent of respondents believe AI helps them meet business goals more efficiently, with 76 percent saying it directly eases their burnout. Younger attorneys, particularly those in Gen Z, are at the forefront, with 91 percent agreeing that AI reduces burnout, compared to 75 percent of millennials.
Singh sees a generational shift in the intent behind legal work. Where older attorneys might strive for perfection, younger attorneys are deliberately seeking meaningful, focused and purposeful experiences, and AI is making that possible.
"I believe that Gen Z sees the virtue of AI because they have not internalized the message that we have to both be perfect and do perfect work; instead, they are creating the new message that our work has to be effective, impactful, and deliberate–and AI can help with all of those things," Singh said.
On the transactional side, AI is rewriting workflows.
Katelyn Canning, director and head of legal at fintech data analytics firm Ocrolus, runs a lean operation. "AI has transformed our legal operations by automating document review, cutting contract analysis time by 75 percent, and enabling our small legal team to manage workload equivalent to a department triple our size," she told Newsweek. That takeaway is powerful: a compact team handling the work of many, with AI powering productivity.
Canning emphasizes that AI-generated first drafts, like routine communications and agreements, allow her attorneys to shift from creators to strategic reviewers. In Ocrolus's highly regulated environment, that pivot elevated the legal department from a traditional cost center to a trusted business partner, speeding up deals and improving risk oversight.
The survey supports this. Respondents reported using AI most often for summarizing case law (61 percent), document and contract review (45 percent and 44 percent, respectively) and drafting legal documents (42 percent). More than half said AI opens time for strategic work.
But the change runs deeper. The survey found 64 percent reported AI improves communication, an essential part of legal coordination, and 46 percent see new career growth thanks to it. For in-house teams, that figure jumps to 55 percent.
Singh noted, "AI is unlocking these superpowers for lawyers—but core legal expertise is more important than ever now. We're going to see the lawyers with good judgment and curiosity rise to the top."
For Singh, AI hasn't just changed how she works; it's changed what's possible. With more time, more clarity and the right tools at her side, she's found a version of legal work that's not only more productive, but more sustainable.

Related Articles

Digital Trends
Peter Horan has been at the forefront of every major technology revolution of the past fifty years. It started with the owner's manual for Pong and selling computers on the floor of the West Coast Computer Faire. Since then, he has published technology magazines and websites. His current passion is the uses of AI.


Time Magazine
How AI Adoption Is Sitting With Workers
There's a danger to focusing primarily on CEO statements about AI adoption in the workplace, warns Brian Merchant, a journalist-in-residence at the AI Now Institute, an AI policy and research institute. 'There's a wide gulf between the prognostications of tech company CEOs and what's actually happening on the ground,' he says. Merchant published Blood in the Machine in 2023, a book about how the historical Luddites resisted automation during the Industrial Revolution. In his Substack newsletter of the same name, Merchant has written about how AI implementation is now reshaping work. To better understand workers' perspectives on how AI is changing jobs, we spoke with Merchant. Here are excerpts from our conversation, edited for length and clarity:

There have been a lot of headlines recently about how AI adoption has led to headcount reductions. How do you define the AI jobs crisis?

There is a real crisis in work right now, and AI poses a distinct kind of threat. But that threat to me, based on my understanding of technological trends in history, is less that we're looking at a widespread, mass-automation, job-wipe-out event and more at a particular set of logics that generative AI gives management and employers. There are jobs that are uniquely vulnerable. They might not be immense in number, but they're jobs that people think are pretty important—writing and artistic creation and that kind of thing. So you do have those jobs being threatened, but then we also have this crisis where AI supplies managers and bosses with this imperative where, whether or not the AI can replace somebody, it's still being pushed as a justification for doing so. We saw this a lot with DOGE and the hollowing out of the public workforce and the AI-first strategies that were touted over there. More often than facilitating outright job replacement, automation is used by bosses to break down tasks, deskill labor, or serve as leverage against workers. This was true in the Luddites' time, and it's true right now. A lot of the companies that say they're 'AI-first' are merely taking the opportunity to reduce salaried headcount and replace it with cheaper, more precarious contract labor. This is what happened with Klarna, the fintech company that has famously been one of the most vocal advocates of AI anywhere. [Editor's note: In May, Klarna CEO Sebastian Siemiatkowski told Bloomberg that the company was reversing its well-publicized move to replace 700 human call-center workers with AI and instead hiring humans again. 'As cost unfortunately seems to have been a too predominant evaluation factor when organizing this, what you end up having is lower quality,' Siemiatkowski said.] After all, firms still need people to ensure the AI output is up to par, edit it, or to 'duct tape it' to make sure it works well enough with existing systems—bosses just figure they can take the opportunity to call that 'unskilled' work and pay the people who are doing it less.

Your project, 'AI Killed My Job,' is an ongoing, multi-part series that dives deeper into how the AI jobs crisis is impacting workers day-to-day. What themes or patterns are emerging from those stories?

I invited workers who have been impacted by AI to reach out and share their stories. The project has just begun, and I've already gotten hundreds of responses at this point.

I expected to see AI being used as a tool by management to try to extract more labor and more value from people, to get people to work harder, and to have it kind of deteriorate conditions rather than replace work outright. That's been borne out, and that's what I've seen. The first installment that I ran was around tech workers. Some people have the assumption that the tech industry is a little bit more homogeneous in its enthusiasm for AI, but that's really not the case. A lot of the workers who have to deal with these tools are not happy with AI and the way it is being used in their companies and the impact it's having on their work. There are a few people [included in the first installment] who have lost their jobs as part of layoffs initiated by a company with an AI-first strategy, including at CrowdStrike and Dropbox, and I'm hearing from many people who haven't quite lost their jobs yet but are deeply concerned that they will.

But, by and large, what you're seeing now is managers using AI to justify speeding up work, trying to get employees to use it to be more productive at the expense of quality or the things that people used to enjoy about their jobs. There are people who are frustrated to see management encouraging the use of more AI at the expense of security or product quality. There's a story from a Google worker who watched colleagues feed AI-generated code into key infrastructure, which was pretty unsettling to many. That such an important and powerful company, one that runs such crucial web infrastructure, would allow AI-generated code to be used in its systems with relatively few safeguards was really surprising. [Editor's note: A Google spokesperson said that the company actively encourages AI use internally, with roughly 30% of the company's code now being AI generated. They cited CEO Sundar Pichai's estimate that AI has increased engineering velocity by 10% but said that engineers have rigorous code review, security, and maintenance standards.] We're also seeing it being used to displace accountability, with managers using AI as a way to deflect blame should something go wrong: 'It's not my fault; it's AI's fault.'

Your book, Blood in the Machine, tells the story of the historical Luddites' uprising against rising automation during the Industrial Revolution. What can we learn from that era that's still relevant today?

One lesson we can learn from the Luddites is that we should be seeking ways to involve more people and stakeholders in the process of developing and deploying technology. The Luddites were not anti-technology. They rose up and they smashed the machine because they had no other choice. The deck was stacked against them, and a lot of them were quite literally starving. Collective bargaining was illegal for them. And, just like today, conditions were increasingly difficult as the democratic levers that people could pull to demand a seat at the table were vanishingly few. (I mean, Silicon Valley just teamed up with the GOP to try and get an outright 10-year ban passed on states' abilities to regulate AI.) That leads to strife, it leads to anger, it leads to feeling like you don't have a say or any options. Now we're looking at artists and writers and content creators and coders and you name it watching their livelihoods become more precarious under worsening conditions, if not getting erased outright.

As you squeeze more and more populations of people, it's not unthinkable that you would see what happened then happen again in some capacity. You're already seeing the roots of that with people vandalizing Waymo cars, which they see as the agents of big tech and automation. That's a reason employers might want to consider the human element rather than putting the pedal to the metal with AI automation: there's a lot of fear, anxiety, and anger at the way all of this has taken shape and is playing out.

What should employers do instead?

When it comes to employers, at the end of the day, if you're shelling out for a bunch of AI, then you're either hoping that your employees will use it to be more productive for you and work harder for you, or you're hoping to get rid of employees. Ideally, the employer would say it's the former. It would trust its employees to know how best to generate more value and make them more productive. In reality, even if a company goes that far, it can still turn around and trim labor costs elsewhere, mandating workers to use AI to pick up laid-off colleagues' workloads and ratchet up productivity. So what you really need is a union contract, or something codified in law, that says you can't just fire people and replace them with AI. You see some union contracts that include language about the ways that AI or automation can and can't be implemented, and what the worker has a say over. Right now, that is the best means of giving people power over a technology that's going to affect their working life. The problem is that we have such low union density in the United States that it limits who can enjoy such a benefit to those who are formally organized. There are also attempts at legislation that put checks on what automation can and can't touch, when AI can be used in the hiring process, or what kinds of data it can collect. Overall, there has to be a serious check on the power of Silicon Valley before we can hope to get workers' voices heard in terms of how the technology's affecting them.

Wall Street Journal
AI's Overlooked $97 Billion Contribution to the Economy
The U.S. economy grew at an annual rate of 3% in the second quarter, which is great news. Does that mean artificial intelligence is delivering on its long-promised benefits? No, because gross domestic product isn't the best place to look for AI's contribution. Yet the official government numbers substantially underestimate the benefits of AI. First-quarter 2025 GDP was down an annualized 0.5%. Labor productivity growth ticked up a respectable but hardly transformative 2.3% in 2024, following a few lean years of gains and losses. Is AI overhyped?