Because when insurers move in, accountability follows.
When insurers start pricing a risk, it means that risk is no longer theoretical. It is frequent enough, measurable enough, and costly enough to build a business around. AI liability insurance is emerging as a new category of risk management for businesses using artificial intelligence. It reflects a broader shift – one in which artificial intelligence is no longer just a tool, but a source of measurable business risk.
On March 18, 2026, HSB (Hartford Steam Boiler), a specialty insurer within Munich Re Group, launched AI liability insurance for small and medium-sized businesses. Munich Re, based in Munich, Germany, is one of the world’s largest reinsurers, helping define how global risks are priced.
“All types of businesses are using AI to do things more quickly and efficiently,” said Timothy Zeilman, global head of product ownership for HSB, in a company statement. “At the same time, the AI transformation brings new legal and financial exposures. Business owners may wonder, am I protected? AI insurance helps remove that uncertainty by filling the gaps in coverage, so businesses can stay ahead of emerging risks.”
HSB AI Liability Insurance is designed to protect against key exposures arising from an organization’s use of AI, including:
- Liability Due to Bodily Injury for lawsuits alleging a person is injured due to the insured’s use of AI. For example: the coverage may apply if an AI-controlled HVAC system creates condensation on the floor, and a person slips, falls, and is hurt.
- Liability Due to Property Damage for lawsuits claiming that property was damaged due to the insured’s use of AI. One possible situation: an employee of an appliance retailer uses an AI chatbot to generate instructions for installing a dishwasher and causes a leak and extensive water damage.
- Personal and Advertising Injury Liability for legal actions claiming the insured’s AI tools violated a person’s right to privacy, committed slander or libel, or infringed copyright. The unauthorized use of content in a marketing brochure, for instance, or defamatory statements in a blog or social post would be covered.
It also covers AI-related losses that some general liability policies exclude, including bodily injury, property damage, and advertising injury for claims stemming from AI-generated advertising, marketing, blogs, and social media – a sign of how far AI liability has moved beyond purely digital harm. This builds on Munich Re’s earlier aiSure product, which since 2018 has insured AI model performance for developers – a fundamentally different risk than liability.
AI Insurance Is Not New – But It Is Expanding
AI insurance is not entirely new. Early solutions aimed at protecting against model underperformance and financial loss when AI systems fail to meet expected outcomes. Those earlier policies were largely focused on developers and technical performance. But now the scope is changing rapidly. The shift toward AI liability insurance reflects how AI risk is evolving from whether systems work as intended, to what happens when they cause harm. By extending coverage to small and medium-sized businesses, HSB is moving AI insurance beyond technical environments into everyday business use, where AI tools are increasingly embedded in operations such as customer service, hiring, and content generation.
A Growing AI Insurance Market
HSB is not alone. Armilla AI launched a standalone AI liability policy in April 2025, underwritten by Chaucer at Lloyd’s, with coverage limits reported up to $25 million. This indicates that AI risk management is becoming a distinct, established category within the broader insurance industry rather than a niche add-on.
According to Munich Re’s own materials, traditional insurance policies do not fully account for AI-specific risks, which often stem from probabilistic outputs, model drift, or unintended consequences of automated decisions. Unlike traditional software, which produces consistent and repeatable outputs, AI systems generate responses based on learned patterns, making their failures less predictable and harder to assess.
AI Risk Is Becoming Part of Business Operations
The broader pattern is a familiar one. New technologies tend to move from innovation to widespread use and eventually into structured risk management. Insurance typically emerges in that later phase, when uncertainty becomes measurable enough to price.
AI appears to be entering that stage. According to an HSB survey of 1,000 main street businesses spanning real estate, manufacturing, professional services, legal, and financial services, 74% of small and medium-sized businesses are using AI programs and 91% plan to expand that use.
Yet the biggest barriers they report are not performance concerns but governance-related ones: data privacy risks and a lack of in-house AI expertise. This gap between adoption and accountability is exactly what this insurance category is being built for.
AI hiring tools, customer service bots, and content generators all carry legal liability, and under EEOC guidance that liability often sits with the business using the tool, not just the vendor that built it.
AI Regulation in Insurance is Already Here – Not Something that’s Coming
In the United States, the National Association of Insurance Commissioners (NAIC), the standard-setting body for state insurance regulators, is already shaping how AI risks are governed. In December 2023, the NAIC formally adopted a Model Bulletin on the Use of Artificial Intelligence Systems by Insurers. It requires insurance companies to establish written AI governance programs, document risk management controls, conduct audits, and demonstrate that AI-driven decisions comply with unfair trade practice laws and anti-discrimination standards, along with requirements for third-party vendor oversight.
As of August 2025, 24 states have adopted the bulletin or pursued related legislation, with a standardized AI evaluation tool for regulators currently in development.
The bulletin explicitly covers model drift, third-party AI vendor oversight, and generative AI, which means any small or medium-sized business whose insurer uses AI to assess, price, or decline its coverage is already operating inside this framework, whether it knows it or not.
This does not necessarily mean AI systems are becoming more or less risky. Rather, it suggests that their risks are becoming clearer, more frequent, and easier to define within existing financial frameworks. Given that cyber insurance reshaped how companies approach security, the same dynamic for AI is not a distant possibility. It is already underway.
AI may still be evolving. But its risks are beginning to look familiar enough to insure, and that changes everything. And when the insurers move in, the accountability frameworks tend to follow.