
Latest news with #KiraSystems

Taming the AI 'beast' without losing ourselves

Business Times

a day ago

THE rise of artificial intelligence (AI) in the workplace is a double-edged sword. On the one hand, it promises unparalleled efficiency, cost savings and innovation. On the other hand, it fuels anxiety, job insecurity and mental strain for millions of workers. As AI continues its relentless march into every corner of the workplace, the psychological toll on employees cannot be ignored. The question is no longer whether AI will reshape work (it already has), but how we can harness its power without sacrificing human well-being. The challenge is not insignificant.

The beauty: AI's promise

AI's transformative potential is undeniable. In law, tools such as Ross Intelligence and Casetext analyse legal precedents in seconds, saving lawyers hours of painstaking research. AI-driven contract platforms such as LexisNexis and Kira Systems flag risks and suggest edits with near-human precision. For accountants, AI automates data entry, compliance checks and even audit sampling, reducing errors and freeing up time for higher-value work.

The gains are real. It is estimated that lawyers in the US, thanks to AI, could reclaim 266 million hours of billable time a year – roughly US$100,000 in additional annual revenue per lawyer. Similar efficiencies ripple across industries, from healthcare to finance. AI doesn't just streamline tasks; it redefines what's possible.

But this efficiency has a human cost. The uncomfortable (and often unarticulated) challenge is clear: these lawyers must now quickly find new value-added services to replace the work that AI performs more quickly and cheaply.

The beast: AI's psychological toll

The dark side of AI's workplace revolution is the pervasive fear of obsolescence. In 2023, Pew Research found that 62 per cent of workers worry that AI could replace their jobs. Goldman Sachs estimates that 300 million jobs worldwide may be affected by AI and its potential for algorithmic automation. The disruption is widespread, affecting low-skilled roles as well as professionals in law, accounting and even creative fields.

This AI challenge has a stark impact on individuals' mental health. Chronic job insecurity breeds stress, depression and burnout. The American Psychological Association links automation anxiety to decreased job satisfaction and heightened workplace tension. A 2023 study by the Organisation for Economic Co-operation and Development tied rapid upskilling demands to rising burnout rates, while Gallup found that 48 per cent of workers feel overwhelmed by the pace of technological change, finding it hard to keep up – much less compete – with AI.

For those who do lose jobs to AI, the consequences are even grimmer. University of Cambridge research shows that communities hit by AI automation experience higher rates of substance abuse and suicide. Unemployed individuals are twice as likely to suffer mental health disorders, according to research from the University of Erlangen-Nuremberg. Put simply, AI's efficiency gains come with a high yet hidden tax on human well-being.

Taming the beast: mitigation strategies

So, how do we reconcile AI's benefits with its human costs? The answer lies in proactive, multipronged strategies that prioritise both productivity and mental health. Upskilling is often touted as the antidote to AI-driven job loss.
Companies such as Amazon and Google have invested billions in training programmes – Amazon's 'Upskilling 2025', for instance, pledged US$1.2 billion towards AI and cloud computing education for its employees. These initiatives are critical, but insufficient on their own. The pressure to constantly reskill can itself be a source of stress. This is akin to 'technostress', the strain experienced by employees in digital fields who must continuously learn new software and tools. Reskilling programmes must be paired with career and personal counselling, flexible timelines and realistic expectations. Otherwise, we risk trading job insecurity for burnout.

Employers must recognise and treat AI-related stress as a workplace hazard. Access to mental health programmes, therapists and peer support networks can help employees navigate this AI-induced uncertainty. A 2023 Deloitte report highlighted that companies investing in mental health saw not just happier employees, but higher productivity. Workers in organisations with robust mental health support saw a 30 per cent drop in absenteeism.

Transparency and candour in the workplace are also key. Workers need clear communication about how AI will be integrated, which roles may change, and how the company plans to support them. The principle behind this approach is simple: uncertainty fuels anxiety, whereas clarity fosters trust.

Preserving human connection

Unfortunately, AI's rise has coincided with a decline in workplace socialisation. Chatbots, virtual assistants and remote-work tools reduce in-person interaction, exacerbating employee isolation. The American Psychological Association notes that remote workers who rely heavily on AI report higher levels of isolation and loneliness.

Employers should design workflows that balance automation with human collaboration. Hybrid models, team-building activities and 'AI-free' zones can help maintain needed social bonds. After all, productivity is not just about output; it is also about people.

But corporate initiatives alone will not solve the systemic challenges. Policymakers must step in with stronger social safety nets – universal healthcare, unemployment benefits tailored for displaced workers, and incentives for companies to retain human labour in the workforce. Ethical AI frameworks are also essential. Tech developers should prioritise tools that augment human work rather than replace it outright. The goal should be partnership and optimisation, not displacement. This may seem idealistic, but it is critical to the individual employee, the community and, ultimately, society.

Finding a way

AI is here to stay. The choice is not between embracing it and rejecting it; rather, it is about shaping its integration with humanity in mind. OpenAI's Sam Altman has mused about the potential for a one-person, billion-dollar company powered by AI, but we must ask: at what cost?

This is not a zero-sum game. AI can drive progress without eroding mental well-being, but only if we act deliberately and intentionally. Employers, policymakers and tech leaders must collaborate to ensure that the AI revolution lifts people up rather than leaves them behind.

The stakes are high. If we fail, we risk winning the battle for efficiency (and technology) but losing the war for human mental wellness and relevance in the workplace. Should this happen, it would be a human tragedy of epic proportions – and one entirely of our own making.
The writer is the group general counsel of Jardine Cycle & Carriage, a member of the Jardine Matheson Group. He sits on several commercial boards, including that of the charity Jardines Mindset, which focuses on mental health, and the global guiding council of the US mental health charity One Mind at Work.

Re-Deploying Everything In Legal

Forbes

25-07-2025

John Arsneault is the CIO of the law firm Goulston & Storrs and the founder of venture capital company Portfolio X.

Highly customizable, AI-enabled software targeted at automating repetitive human-led tasks will upend all existing software in the legal industry (and all industries, for that matter). While the timeframe for this transformation is unknown, the process has already begun. Most of the existing software base, including current SaaS platforms, is unlikely to deliver the benefits of AI-native tools and will thus become a relic of the past. Deploying AI tools designed for workload automation from their inception should prove to be a more streamlined path than waiting for existing software to be retrofitted and then adjusting workloads whenever platform upgrades become available.

In recent years, law firms and corporate legal departments have gravitated toward artificial intelligence (AI) as a means to streamline repetitive, manual tasks. These highly customizable, AI-enabled platforms are transforming legal workflows by automating document drafting, contract review, research, compliance and even client intake. While firms are adopting AI quickly—five times faster than they adopted cloud systems—ethical, accuracy and data-security considerations remain challenges.

Today's AI tools can be configured according to firm-specific processes, policies and templates. For instance, contract automation platforms allow law firms to upload preferred clause libraries, set redlining rules and define strategic guardrails. Tools like LawGeex enable legal teams to enforce custom playbooks for consistent contract review. AI tools such as Voiceflow make it easy to build branded client-facing assistants via no-code interfaces. Next-gen platforms integrate with existing systems—CRMs, calendars, document management, e-signature services—ensuring legal AI becomes part of a unified tech ecosystem. One such assistant, for instance, connects with 3,000-plus platforms and lets teams customize tone, priority tasks and escalation paths.

Core Use Cases In Legal Automation

Document Review And Contract Lifecycle Management (CLM)

Contract review is a central use case. AI systems like LawGeex, Kira Systems and Pocketlaw quickly surface missing clauses, risky language or outdated templates (a minimal illustration of this kind of playbook check appears after these use cases). Full CLM suites—such as Actionstep, Agiloft and ContractExpress—automate everything from clause extraction and drafting to storage and e-signature.

Legal Research

Natural language processing (NLP) tools like Ross Intelligence and Casetext CoCounsel scan statutes, opinions and regulatory updates to provide cited answers within minutes. These tools are trained on specific jurisdictions and integrate retrieval-augmented generation (RAG) to improve accuracy. Platforms like Everlaw automate document tagging, predictive coding and storytelling in litigation contexts. Lex Machina uses historic case data to forecast litigation outcomes, helping lawyers make better strategic decisions.

Compliance And Monitoring

AI continuously tracks regulatory changes, flags non-compliant clauses and alerts firms to risk exposure across jurisdictions. Enidia AI specializes in contract compliance checks across multiple regulatory frameworks.

Client Intake And Communication

Chatbots, e-receptionists and virtual legal assistants such as Voiceflow bots handle client intake, appointment booking, FAQs and reminders, freeing staff from repetitive admin work.
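To make the playbook-style contract check described above concrete, here is a minimal, hypothetical sketch in Python. The clause names, regular expressions and the review_contract function are illustrative assumptions for demonstration only; they are not the actual rules or APIs of LawGeex, Kira Systems or any other vendor.

```python
import re

# Illustrative playbook: required clause names mapped to simple detection patterns.
# These clause choices and regexes are assumptions for demonstration only,
# not the actual rules used by LawGeex, Kira Systems or any other vendor.
PLAYBOOK = {
    "governing law": re.compile(r"governing law|governed by the laws of", re.I),
    "limitation of liability": re.compile(r"limitation of liability", re.I),
    "confidentiality": re.compile(r"confidential information|non-disclosure", re.I),
    "termination": re.compile(r"terminat(e|ion)", re.I),
}

def review_contract(text: str) -> dict:
    """Flag which playbook clauses appear to be present and which are missing."""
    present = sorted(name for name, pattern in PLAYBOOK.items() if pattern.search(text))
    missing = sorted(set(PLAYBOOK) - set(present))
    return {"present": present, "missing": missing}

if __name__ == "__main__":
    sample = (
        "This Agreement shall be governed by the laws of New York. "
        "Each party shall protect the other party's Confidential Information."
    )
    result = review_contract(sample)
    print("Present:", result["present"])  # ['confidentiality', 'governing law']
    print("Missing:", result["missing"])  # ['limitation of liability', 'termination']
```

A production playbook layers machine-learned clause classification and redlining suggestions on top of checks like these, but the flag-what-is-missing logic is the core idea.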
Billing, Time Tracking And Case Management

AI-integrated systems (e.g., Clio) automatically log hours, generate invoices, assign tasks and notify attorneys of deadlines—a major leap from manual tracking.

Leading Platforms And Their Capabilities

• Ironclad: A CLM platform leveraging GPT-3/4 to auto-scan contracts, extract terms and assist redlining; supports custom workflows and integrates with cloud storage.
• Harvey AI: Built atop GPT-4 and designed specifically for legal environments; offers custom LLMs tailored to areas of legal practice and integrates deeply with document archives.
• LawGeex, Paxton Legal AI, Spellbook AI, LegalRobot: Specialized in contract review, drafting support, risk-flagging and clause comparisons; each offers policy-driven custom settings.
• ContractExpress: Template-based drafting tool with intelligent questionnaires; widely used by law firms and corporations.
• Agiloft, Actionstep: Customizable practice-management tools featuring document automation, workflow controls and e-sign capabilities.
• UiPath: A leading robotic process automation (RPA) platform used to automate repetitive tasks (e.g., moving files, updating systems) and integrate AI capabilities into broader business processes.

Adoption Drivers And ROI

Efficiency Gains

Firms can expect reductions in time spent summarizing intake notes and performing discovery workflows.

Quality And Consistency

Rule-based contract tools enforce policy adherence, reduce human error and generate consistent outputs—even when deployed across geographies.

Strategic Legal Work

AI frees attorneys (especially associates) from administrative burden, allowing them to focus on higher-value, strategic functions.

Risks, Mitigation And Ethical Concerns

Hallucinations And Accuracy

AI can hallucinate, producing plausible but false citations. Morgan & Morgan faced sanctions after relying on fabricated case law. To combat this, most firms employ RAG-based systems and require human validation (a minimal sketch of this retrieval-grounded pattern appears at the end of this article).

Data Privacy And Security

Client confidentiality is paramount. Firms are implementing robust data policies, including private-model deployments, zero data retention and SOC 2 / GDPR / CCPA compliance.

Ethical Adoption

Some argue that not using AI could be unethical if it deprives clients of efficient representation. Several bar associations are debating whether lawyers are obligated to integrate AI responsibly.

The Future Of Legal Automation

• AI-Powered Agents: Firms are aiming to develop agentic AI that can manage entire legal workflows—from due diligence through drafting to review—acting akin to junior lawyers.
• Proprietary LLMs: AM Law 100 firms and others are building in-house LLMs to keep sensitive data secure while enabling customization.
• Global Expansion: Tools like Harvey have seen some global adoption.
• Regulatory Compliance: Expect more regulation, standardization and skill certification as AI use becomes more common in law.

Conclusion

Customizable AI tools are rapidly reshaping legal work. By automating repetitive manual tasks—drafting, review, research, billing—they enable more accurate, faster and lower-cost legal services. However, realizing these benefits demands careful tool selection, integration, oversight and governance. Adoption must balance innovation with risk, ensuring systems are accurate, secure and used ethically. Firms that get the balance right will gain not only substantial operational efficiency but also strategic advantage—and the potential to significantly improve access to justice.
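As noted under Hallucinations And Accuracy, most firms pair human validation with retrieval-augmented generation (RAG), which grounds the model's answer in real, cited passages rather than letting it answer from memory. Below is a minimal, hypothetical Python sketch of that pattern; the toy corpus, the word-overlap retriever and the prompt format are assumptions standing in for a production vector search and a vetted, jurisdiction-specific corpus, not any vendor's pipeline.

```python
# Minimal retrieval-augmented generation (RAG) sketch for grounded legal research.
# The corpus entries, scoring and prompt format are illustrative assumptions;
# a production system would use a vector database and a vetted legal corpus.

CORPUS = [
    {"cite": "Toy Case A", "text": "A non-compete clause is unenforceable if its duration is unreasonable."},
    {"cite": "Toy Case B", "text": "Liquidated damages must be a reasonable estimate of anticipated harm."},
    {"cite": "Toy Statute C", "text": "Consumer contracts must disclose automatic renewal terms conspicuously."},
]

def retrieve(question: str, k: int = 2) -> list:
    """Rank passages by naive word overlap with the question (stand-in for vector search)."""
    q_words = set(question.lower().split())
    scored = sorted(
        CORPUS,
        key=lambda doc: len(q_words & set(doc["text"].lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(question: str, passages: list) -> str:
    """Assemble a prompt that instructs the model to answer only from the cited sources."""
    sources = "\n".join(f"[{i + 1}] {p['cite']}: {p['text']}" for i, p in enumerate(passages))
    return (
        "Answer the question using ONLY the numbered sources below, citing them by number.\n"
        "If the sources do not answer the question, say so.\n\n"
        f"Sources:\n{sources}\n\nQuestion: {question}\n"
    )

if __name__ == "__main__":
    question = "Is a non-compete clause enforceable if its duration is unreasonable?"
    prompt = build_prompt(question, retrieve(question))
    print(prompt)  # This grounded prompt would then be sent to the firm's chosen LLM.
```

The key design choice is that the model is shown, and instructed to cite, only passages that actually exist, which is what makes fabricated citations easier to catch during the human-validation step.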
