20-05-2025
Artificial intelligence insurance? This startup in Canada will cover the costs of AI mistakes
Lloyd's of London, acting through a Toronto-based startup called Armilla, has begun offering a new type of insurance for the artificial intelligence era: a policy that can help cover losses caused by AI.
While Lloyd's and its partner are simply capitalising on the AI trend – insuring against a new phenomenon to drive their own revenues, as they would any other – the move is a reminder that AI is both powerful and still a potential business risk. And if you thought adopting AI tools would help you push down the cost of operating your business, the advent of this policy is also a reminder to check whether AI use might actually bump some of your costs (like insurance) up.
Armilla's policy is intended to help offset the cost of lawsuits against a particular company if it's sued by, say, a customer or a third party claiming harm thanks to an AI product, the Financial Times noted. The idea is to cover costs that could include payouts related to AI-caused damages and legal fees associated with any such lawsuit.
Armilla's CEO told the newspaper that the new insurance product may have an upside beyond protecting companies against certain AI losses. Karthik Ramakrishnan said he thinks it could even boost AI adoption rates because some outfits are reluctant to embrace the innovative new technology over fears that tools like chatbots will malfunction.
Armilla cited a 2024 incident in which Air Canada was using an AI chatbot as part of its customer service system, and the AI completely fabricated a discount that it offered to customers – a judge then ruled that the airline had to honour the offer. The Lloyd's-backed insurance policy would likely have offset some of these losses had the chatbot been deemed to have underperformed. But it's not a blanket policy, the FT noted, and the company won't offer to cover risky or error-prone AIs – like any insurer wary of covering a 'lemon.'
Ramakrishnan explained the policy is offered once an AI model is assessed and the company is 'comfortable with its probability of degradation,' and then will only pay out compensation if the 'models degrade.' The FT also noted that some other insurers already build in cover for certain AI-connected losses as part of broader technology error policies, though these may include much more limited payouts than for other tech-related issues.
The consequences of a company acting on hallucinated information from an AI – an answer the system has simply invented but presents as fact – can be severe, 'leading to flawed decisions, financial losses, and damage to a company's reputation,' says industry news site PYMNTS. The outlet also noted that serious questions of accountability may arise when an AI is responsible for this kind of error.
This sentiment echoes the warnings made by MJ Jiang, chief strategy officer at New York-based small business lending platform Credibly. In a recent interview with Inc, Jiang said that companies are at risk of serious legal consequences from AI hallucination-based errors, because you 'cannot eliminate, only mitigate, hallucinations.'
Companies using the tech should ask themselves who will get sued when an AI makes an error, Jiang said, and they should have mitigation procedures in place to prevent such errors in the first place. In fact, she thinks that 'because GenAI cannot explain to you how it came up with the output, human governance will be essential in businesses where the use cases are of higher risk to the business.'
Other business experts have also warned that using AI is not a risk-free endeavour and have issued guidance on how to prepare businesses for AI compliance and any subsequent legal issues. Keeping these issues in mind when preparing your AI budget is a good idea. – Inc./Tribune News Service