Latest news with #Armilla

Artificial intelligence insurance? This startup in Canada will cover the costs of AI mistakes

The Star

20-05-2025

Lloyd's of London, acting through a Toronto-based startup called Armilla, has begun to offer a new type of insurance cover to companies for the artificial intelligence era: its new policy can help cover against losses caused by AI.

While Lloyd's and its partner are simply capitalising on the AI trend – in the same way they'd insure against other new phenomena, in an effort to drive their own revenues – the move is a reminder that AI is both powerful and still a potential business risk. And if you thought adopting AI tools would help you push down the cost of operating your business, the advent of this policy is also a reminder that you need to check whether AI use might actually push some of your costs (like insurance) up.

Armilla's policy is intended to help offset the cost of lawsuits against a particular company if it is sued by, say, a customer or a third party claiming harm caused by an AI product, the Financial Times noted. The idea is to cover costs that could include payouts related to AI-caused damages and the legal fees associated with any such lawsuit.

Armilla's CEO told the newspaper that the new insurance product may have an upside beyond protecting companies against certain AI losses. Karthik Ramakrishnan said he thinks it could even boost AI adoption rates, because some outfits are reluctant to embrace the new technology over fears that tools like chatbots will malfunction.

Armilla cited a 2024 incident in which Air Canada was using an AI chatbot as part of its customer service system and the AI completely fabricated a discount, which it offered to customers – a judge then ruled that the airline had to honour the offer. The Lloyd's-backed insurance policy would likely have offset some of these losses had the chatbot been deemed to have underperformed.

But it is not a blanket policy, the FT noted, and the company will not offer to cover risky or error-prone AIs – like any insurer wary of covering a 'lemon'. Ramakrishnan explained that the policy is offered once an AI model has been assessed and the company is 'comfortable with its probability of degradation', and that it will only pay out compensation if the 'models degrade'. The FT also noted that some other insurers already build in cover for certain AI-connected losses as part of broader technology error policies, though these may include much more limited payouts than for other tech-related issues.

The consequences of a company acting on hallucinated information from an AI, where the AI simply makes up a fake answer but tries to pass it off as truth, can be severe, 'leading to flawed decisions, financial losses, and damage to a company's reputation,' says industry news site PYMNTS. The outlet also noted that serious questions of accountability may arise when an AI is responsible for this kind of error.

This sentiment echoes warnings from MJ Jiang, chief strategy officer at New York-based small business lending platform Credibly. In a recent interview with Inc, Jiang said that companies are at risk of serious legal consequences from AI hallucination-based errors, because you 'cannot eliminate, only mitigate, hallucinations.' Companies using the tech should ask themselves who will get sued when an AI makes an error.

Jiang said they should have mitigation procedures in place to prevent such errors in the first place. In fact, she thinks that 'because GenAI cannot explain to you how it came up with the output, human governance will be essential in businesses where the use cases are of higher risk to the business.' Other business experts have also warned that using AI is not a risk-free endeavour and have issued guidance on how to prepare businesses for AI compliance and any subsequent legal issues. Keeping these issues in mind when preparing your AI budget is a good idea. – Inc./Tribune News Service

AI hallucination puts firms at risk? New insurance covers legal costs

Business Standard

12-05-2025

Insurers at Lloyd's of London have introduced a new insurance product designed to protect businesses from financial losses arising from artificial intelligence system failures, according to a report by The Financial Times. The insurance, developed by Y Combinator-backed start-up Armilla, provides coverage for legal claims against companies when AI tools generate inaccurate outputs. The policy offers financial protection against potential legal consequences, including court-awarded damages and associated legal expenses. It responds to rising concerns over AI's tendency to produce unreliable or misleading information, commonly referred to as "hallucinations" in AI terminology.

As companies increasingly integrate AI tools to enhance efficiency, they also face growing risks from errors caused by flaws in AI models that lead to hallucinations or fabricated information. Last year, a tribunal ruled that Air Canada must honour a discount its customer service chatbot had wrongly offered.

What is an AI hallucination?

An AI hallucination occurs when an algorithm generates information that appears credible but is actually false or misleading. Computer scientists use the term to describe such errors, which have been seen in various AI tools. These hallucinations can cause significant problems when AI is used in sensitive areas. While some errors are relatively harmless, such as a chatbot giving a wrong answer, others can have serious consequences. In high-stakes settings like legal cases or health insurance decisions, inaccuracies can severely impact people's lives. Unlike systems that follow strict, human-defined rules, AI models operate based on statistical patterns and probabilities, which makes occasional errors inevitable. Though minor mistakes may not pose a big problem for most users, hallucinations become critical when dealing with legal, medical, or confidential business matters.

Karthik Ramakrishnan, Armilla's chief executive, said the new product could encourage more companies to adopt AI by addressing fears that tools like chatbots might break down or make errors.

Hallucinations getting worse despite AI advances

Despite improvements by companies like OpenAI and Google in reducing hallucination rates, the problem has worsened with the introduction of newer reasoning models. OpenAI's internal assessments found that its latest models hallucinate more often than earlier versions. Specifically, OpenAI reported that its most advanced model, o3, produced hallucinations 33 per cent of the time on the PersonQA benchmark, which tests the ability to answer questions about public figures – more than double the rate of its earlier model, o1.
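
As a rough illustration of how a benchmark figure like the 33 per cent PersonQA number is derived, the sketch below computes a hallucination rate as the share of graded answers judged to be fabricated. The function name and data layout are hypothetical assumptions for illustration; OpenAI's actual grading pipeline is not described in the article.

```python
# Hypothetical sketch: hallucination rate as the fraction of benchmark answers
# graded as fabricated. Data layout and names are assumptions for illustration.

def hallucination_rate(graded_answers: list[bool]) -> float:
    """graded_answers: True where the model's answer was judged a hallucination."""
    if not graded_answers:
        return 0.0
    return sum(graded_answers) / len(graded_answers)


# Toy example: 33 hallucinated answers out of 100 questions -> 33 per cent,
# matching the figure the article reports for o3 on PersonQA.
grades = [True] * 33 + [False] * 67
print(f"Hallucination rate: {hallucination_rate(grades):.0%}")  # -> 33%
```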

Insurers launch cover for losses caused by AI chatbot errors

Business Mayor

11-05-2025

Insurers at Lloyd's of London have launched a product to cover companies for losses caused by malfunctioning artificial intelligence tools, as the sector aims to profit from concerns about the risk of costly hallucinations and errors by chatbots.

The policies developed by Armilla, a start-up backed by Y Combinator, will cover the cost of court claims against a company if it is sued by a customer or another third party who has suffered harm because of an AI tool underperforming. The insurance will be underwritten by several Lloyd's insurers and will cover costs such as damages payouts and legal fees.

Companies have rushed to adopt AI to boost efficiency, but some tools, including customer service bots, have faced embarrassing and costly mistakes. Such mistakes can occur, for example, because of flaws which cause AI language models to 'hallucinate' or make things up. Virgin Money apologised in January after its AI-powered chatbot reprimanded a customer for using the word 'virgin', while courier group DPD last year disabled part of its customer service bot after it swore at customers and called its owner the 'worst delivery service company in the world'.

A tribunal last year ordered Air Canada to honour a discount that its customer service chatbot had made up. Armilla said that the loss from selling the tickets at a lower price would have been covered by its insurance policy if Air Canada's chatbot was found to have performed worse than expected.

Karthik Ramakrishnan, Armilla chief executive, said the new product could encourage more companies to adopt AI, since many are currently deterred by fears that tools such as chatbots will break down.

Some insurers already include AI-related losses within general technology errors and omissions policies, but these generally include low limits on payouts. A general policy that covers up to $5mn in losses might stipulate a $25,000 sublimit for AI-related liabilities, said Preet Gill, a broker at Lockton, which offers Armilla's products to its clients. AI language models are dynamic, meaning they 'learn' over time. But losses from errors caused by this process of adaptation would not normally be covered by typical technology errors and omissions policies, said Logan Payne, a broker at Lockton.

A mistake by an AI tool would not on its own be enough to trigger a payout under Armilla's policy. Instead, the cover would kick in if the insurer judged that the AI had performed below initial expectations. For example, Armilla's insurance could pay out if a chatbot gave clients or employees correct information only 85 per cent of the time, after initially doing so in 95 per cent of cases, the company said. 'We assess the AI model, get comfortable with its probability of degradation, and then compensate if the models degrade,' said Ramakrishnan.

Tom Graham, head of partnership at Chaucer, an insurer at Lloyd's that is underwriting the policies sold by Armilla, said his group would not sign policies covering AI systems they judge to be excessively prone to breakdown. 'We will be selective, like any other insurance company,' he said.
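
The 85 per cent versus 95 per cent example describes a threshold-style trigger: cover responds only when measured accuracy falls below the level assessed at underwriting. The sketch below is a minimal, hypothetical illustration of that logic; the function name, tolerance parameter and accuracy inputs are assumptions and do not reflect Armilla's actual underwriting terms.

```python
# Hypothetical sketch of the "performance degradation" trigger described above.
# Thresholds, names and the accuracy-measurement method are assumptions only;
# Armilla's real underwriting criteria are not public.

def degradation_trigger(baseline_accuracy: float,
                        observed_accuracy: float,
                        tolerance: float = 0.02) -> bool:
    """Return True if measured accuracy has fallen far enough below the
    baseline agreed at underwriting time to trigger cover."""
    return observed_accuracy < baseline_accuracy - tolerance


# Example from the article: a chatbot assessed at 95% correct answers
# later answering correctly only 85% of the time.
if __name__ == "__main__":
    triggered = degradation_trigger(baseline_accuracy=0.95, observed_accuracy=0.85)
    print(f"Payout triggered: {triggered}")  # -> Payout triggered: True
```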

Armilla Launches Affirmative AI Liability Insurance with Lloyd's Underwriter, Chaucer

Cision Canada

30-04-2025

TORONTO, April 30, 2025 /CNW/ - Armilla Insurance Services (Armilla), Coverholder at Lloyd's, today announced the launch of its AI Liability Insurance policy. Underwritten by certain underwriters at Lloyd's, including Chaucer, this pioneering policy provides affirmative coverage for AI.

"Businesses are racing to deploy AI, but their risk management and insurance tools haven't kept pace. There's a growing concern of 'silent AI cover' – the uncertainty of whether existing policies will respond to AI-specific failures, potentially mirroring the early, costly lessons of cyber risk," said Karthik Ramakrishnan, CEO of Armilla. "Our AI Liability Insurance provides clear, affirmative coverage. It's built from the ground up to address the specific ways AI can fail, giving businesses the confidence to innovate responsibly."

As AI adoption continues to surge, AI incidents have spurred over 150 lawsuits in the US in the last five years, according to researchers at George Washington University. With AI regulations proliferating and scrutiny intensifying, it is essential for AI adopters to protect themselves, regardless of fault.

Traditional insurance policies fall short on AI. Tech E&O policies often lack sufficient limits, place the ensuing liabilities on the user, and do not extend to customized models. Meanwhile, many enterprises lack blanket E&O coverage, leaving their internally developed AI solutions vulnerable to ambiguous legacy policy wording, potential disputes, and future exclusions.

Armilla's AI Liability Insurance addresses key AI-related exposures through a novel, broadly affirmative trigger around the risk of underperformance of AI applications, such as the failure of an AI solution to perform as intended, or the generation of critical errors, hallucinations or inaccuracies leading to damages. Armilla's policy responds by covering legal costs and liabilities caused by AI failures.

"The proliferation of AI technology creates novel challenges that demand innovative insurance solutions," commented Nasra Ahmed, Senior Innovation Manager at Chaucer. "Existing frameworks often leave businesses exposed. We are proud to partner with Armilla, leveraging their distinct technical insight to provide the market with a product that offers much-needed clarity and certainty for companies navigating the complexities of AI adoption."

By combining high aggregate limits with forward-thinking underwriting, Armilla and Chaucer are redefining how the insurance industry supports responsible AI development.

About Armilla: Armilla, Coverholder at Lloyd's, is the world's only managing general agent (MGA) exclusively focused on AI insurance. Armilla combines deep technical expertise in assessing AI risk with innovative insurance solutions. By innovating AI-specific insurance products, Armilla empowers AI pioneers to confidently deploy AI. For more information, visit

About Chaucer: Chaucer are a leading specialty (re)insurance group working with brokers, coverholders and clients to protect and support business activities around the world. Our services are accessed both through Lloyd's of London and the company markets. We are defined by an enterprising, bespoke approach to (re)insurance, enabled by the individual character, experience and imagination of our expert teams. Chaucer is a member of the China Re Group and backed by their financial and operational resources. China Re is one of the world's largest reinsurance companies, whose outstanding and comprehensive strength is rated A (excellent) by AM Best and A (strong) by S&P Global Ratings.

Armilla Launches Affirmative AI Liability Insurance with Lloyd's Underwriter, Chaucer

Associated Press

30-04-2025

TORONTO, April 30, 2025 /PRNewswire/ - Armilla Insurance Services (Armilla), Coverholder at Lloyd's, today announced the launch of its AI Liability Insurance policy. Underwritten by certain underwriters at Lloyd's, including Chaucer, this pioneering policy provides affirmative coverage for AI.

'Businesses are racing to deploy AI, but their risk management and insurance tools haven't kept pace. There's a growing concern of 'silent AI cover' – the uncertainty of whether existing policies will respond to AI-specific failures, potentially mirroring the early, costly lessons of cyber risk,' said Karthik Ramakrishnan, CEO of Armilla. 'Our AI Liability Insurance provides clear, affirmative coverage. It's built from the ground up to address the specific ways AI can fail, giving businesses the confidence to innovate responsibly.'

As AI adoption continues to surge, AI incidents have spurred over 150 lawsuits in the US in the last five years, according to researchers at George Washington University. With AI regulations proliferating and scrutiny intensifying, it is essential for AI adopters to protect themselves, regardless of fault.

Traditional insurance policies fall short on AI. Tech E&O policies often lack sufficient limits, place the ensuing liabilities on the user, and do not extend to customized models. Meanwhile, many enterprises lack blanket E&O coverage, leaving their internally developed AI solutions vulnerable to ambiguous legacy policy wording, potential disputes, and future exclusions.

Armilla's AI Liability Insurance addresses key AI-related exposures through a novel, broadly affirmative trigger around the risk of underperformance of AI applications, such as the failure of an AI solution to perform as intended, or the generation of critical errors, hallucinations or inaccuracies leading to damages. Armilla's policy responds by covering legal costs and liabilities caused by AI failures.

'The proliferation of AI technology creates novel challenges that demand innovative insurance solutions,' commented Nasra Ahmed, Senior Innovation Manager at Chaucer. 'Existing frameworks often leave businesses exposed. We are proud to partner with Armilla, leveraging their distinct technical insight to provide the market with a product that offers much-needed clarity and certainty for companies navigating the complexities of AI adoption.'

By combining high aggregate limits with forward-thinking underwriting, Armilla and Chaucer are redefining how the insurance industry supports responsible AI development.

About Armilla: Armilla, Coverholder at Lloyd's, is the world's only managing general agent (MGA) exclusively focused on AI insurance. Armilla combines deep technical expertise in assessing AI risk with innovative insurance solutions. By innovating AI-specific insurance products, Armilla empowers AI pioneers to confidently deploy AI. For more information, visit

About Chaucer: Chaucer are a leading specialty (re)insurance group working with brokers, coverholders and clients to protect and support business activities around the world. Our services are accessed both through Lloyd's of London and the company markets. We are defined by an enterprising, bespoke approach to (re)insurance, enabled by the individual character, experience and imagination of our expert teams. Chaucer is a member of the China Re Group and backed by their financial and operational resources. China Re is one of the world's largest reinsurance companies, whose outstanding and comprehensive strength is rated A (excellent) by AM Best and A (strong) by S&P Global Ratings.

SOURCE Armilla AI Inc.
