Executive summary – what changed and why it matters

Major commercial insurers (including AIG, Great American and W.R. Berkley) have asked regulators to permit explicit exclusions for AI‑related liabilities in corporate insurance policies. In plain terms: insurers want to limit, or opt out of, covering losses arising from large AI models because they see those losses as opaque, fast‑changing and potentially systemic. The shift could materially slow enterprise AI adoption, raise operating costs for firms that rely on AI, and force new regulatory and contractual controls in 2025.

  • Substantive change: leading insurers are seeking regulatory approval to carve AI liabilities out of standard coverage.
  • Why it matters: exclusions shift loss-bearing to corporations, vendors, or alternative risk markets and reduce insurers’ ability to pool AI exposures.
  • Near-term impact: expect narrower policy language, growth in affirmative AI products, higher retention/self‑insurance, and pressure for AI risk disclosures.

Breaking down the announcement

The core ask of regulators is procedural but consequential: allow insurers to explicitly exclude AI‑related claims so they are not forced to pay for ambiguous harms triggered by models. Insurers frame this as a risk‑management necessity, citing model unpredictability, scant actuarial history, and recent high‑profile failures (examples cited publicly include events tied to Google, Air Canada and Arup) that illustrate the potential for simultaneous, correlated losses.

Industry context and quantitative signals

This move follows industry research showing that AI jumped to the top of insurers’ risk lists in 2025: a survey of roughly 170 insurance experts ranked AI as the number‑one sector risk, with more than 85% expecting its impact to grow materially over the next five years. Insurers already have limited historical loss data for AI exposures, and they warn that a single model failure could produce many claims across multiple sectors, creating a concentration risk that standard diversification and reinsurance strategies struggle to contain.

What insurers are doing instead

Three concrete responses are already visible in the market:

  • Broad AI exclusions in legacy policies to avoid ambiguous coverage.
  • Launch of affirmative AI insurance products — examples include Munich Re’s aiSure and new Lloyd’s‑market offerings (Armilla) that explicitly underwrite AI perils with tailored terms.
  • Underwriting tightening: higher retentions, stricter representation and warranty clauses, and requirements for demonstrable model governance and logging before coverage is bound.

What this means for buyers and operators

If regulators permit exclusions, firms using AI should expect four immediate consequences: higher self‑insurance costs, more contractual risk transfer to vendors, reduced availability of D&O and professional liability coverage for AI‑related suits, and increased due diligence demands from underwriters. Companies that can’t demonstrate robust model governance, validation, and incident logging will face either exclusion, steep premiums, or both.
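
To make the first of those consequences concrete, here is a minimal sketch of how a risk team might compare expected retained losses with and without cover. Every probability, loss amount, retention and limit below is a hypothetical placeholder, not market data.

```python
# Minimal sketch: expected annual retained loss with and without AI cover.
# All probabilities, loss amounts, retention and limit are hypothetical placeholders.

def expected_retained_loss(scenarios, retention, limit):
    """scenarios: iterable of (annual_probability, gross_loss) pairs."""
    with_cover = 0.0
    if_excluded = 0.0
    for prob, loss in scenarios:
        insured = max(0.0, min(loss, limit) - retention)  # portion the policy would pay
        with_cover += prob * (loss - insured)             # firm keeps retention + excess over limit
        if_excluded += prob * loss                        # exclusion: firm keeps the full loss
    return with_cover, if_excluded

# Hypothetical AI-incident scenarios: (annual probability, gross loss in USD)
scenarios = [(0.05, 2_000_000), (0.01, 25_000_000), (0.002, 100_000_000)]
covered, excluded = expected_retained_loss(scenarios, retention=1_000_000, limit=20_000_000)
print(f"Expected retained loss with cover:  ${covered:,.0f}")   # ~$272,000
print(f"Expected retained loss if excluded: ${excluded:,.0f}")  # ~$550,000
```

The same structure extends naturally to Monte Carlo simulation once a firm has scenario data it trusts; the point is to put a number on the gap an exclusion would open.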

Risks and governance considerations

Key governance concerns include liability ambiguity across developer, deployer and integrator boundaries; potential for correlated losses across clients using similar third‑party models; and regulatory fragmentation as states or nations diverge on allowed exclusions. From a compliance perspective, the shift raises questions about solvency and consumer protection if insurers shed exposures without parallel regulatory guardrails.

Where this sits relative to alternatives

Affirmative AI policies are the market alternative to exclusion: they create priced coverage with specified limits, but currently come with narrower scope and higher cost. Reinsurance and capital‑markets solutions (catastrophe bonds, insurance‑linked securities) may absorb some systemic exposure over time, but they require standardized metrics and loss definitions that the market does not yet have.

Concrete recommendations — who should act and how

  • Risk officers: Immediately inventory AI dependencies, locate policy language for AI exclusions, and quantify potential retained losses under exclusion scenarios.
  • CISOs/Model ops teams: Implement continuous model logging, versioning, explainability and post‑deployment monitoring; these are now primary underwriting signals (a minimal sketch follows this list).
  • Legal/commercial teams: Rework procurement to push for vendor indemnities, stricter SLAs, and explicit allocation of AI risk in contracts.
  • Insurers & regulators: Collaborate to define standard loss taxonomies and reporting metrics so affirmative cover can scale without surprise concentrations.

Bottom line: this is a market correction, not an apocalypse. But it changes incentives immediately — firms that can demonstrate mature AI governance will retain access to capital markets and insurance at manageable cost; those that cannot will face higher costs or coverage gaps. Triage now: find your AI exposure points, revise contracts, and prioritize monitoring and audit trails before underwriting seasons tighten further.