The surprise isn’t that insurers are nervous about AI. It’s where they’re drawing the line. Cyber carriers, for the moment, are largely holding firm; some are even adding affirmative endorsements for AI-related incidents. The exclusions are landing elsewhere: directors and officers (D&O), errors and omissions (E&O), employment practices, fiduciary, and crime. If your renewal packet includes new language about “artificial intelligence” or “generative AI,” you’re seeing the same pattern. Carriers are carving out AI risk from the policies that protect management and professional services—often in ways that go well beyond a narrow carve-out for “AI as a product.” The result is coverage that can disappear for claims that are only loosely tied to how you use or describe AI.
The Language That’s Actually Appearing
Two forms have drawn the most attention. Berkley has introduced what’s been called an “absolute” AI exclusion for D&O, E&O, and fiduciary liability. The endorsement bars coverage for any claim “based upon, arising out of, or attributable to” the actual or alleged use, deployment, or development of artificial intelligence. The list of examples is long: AI-generated content, failure to detect third-party AI content, inadequate AI governance or training, breach of duties related to AI creation or deployment, products or services that incorporate AI, and representations made by chatbots or virtual agents. It also sweeps in statements and disclosures about AI use and regulatory actions related to AI. The term “Artificial Intelligence” is defined broadly—any machine-based system that infers from inputs how to generate outputs such as predictions, content, recommendations, or decisions. That’s not limited to large language models. A rules-based system that “infers” could be in scope.
Hamilton Insurance Group has taken a similar path with a Generative Artificial Intelligence Exclusion for professional liability. It removes coverage for claims “based upon, arising out of, or in any way involving” any actual or alleged use of “generative artificial intelligence” by the insured, and it names tools: ChatGPT, Bard, Midjourney, DALL-E. Other carriers are filing or rolling out exclusions built on ISO-style forms (e.g., Verisk’s CG 40 47 and CG 40 48 endorsements on the general liability side), and similar language is appearing in design professional E&O forms. The trend is the same: lead-in language that ties exclusion to “arising out of” or “in any way involving” AI, plus definitions that capture a wide range of systems and uses.
“Arising out of” is the phrase that does the work. In insurance law it’s routinely read to require only a causal connection—not that AI be the sole or even primary cause of the loss. A claim that is “based upon” or “arising out of” AI can be excluded even when the core allegation is classic D&O or E&O territory: a board decision, a professional error, a fiduciary breach. If the facts involve AI somewhere in the chain, the carrier has a basis to deny. That’s why commentators have called these exclusions a sledgehammer rather than a scalpel.
Why Management Liability, and Why Now
Insurers are reacting to a risk they can’t yet price. AI shows up in securities disclosure (AI-washing), hiring and HR (algorithmic bias, résumé screening), professional advice (reliance on AI-generated analysis or drafting), and board oversight (failure to govern AI deployment). The SEC has already brought AI-washing cases against advisers and one public company. Employment claims tied to AI tools are a known exposure, and so are malpractice claims where a professional relied on an AI output that turned out to be wrong (the Mata v. Avianca sanctions for citing fake ChatGPT-generated cases are the canonical example). From the carrier’s perspective, putting a broad AI exclusion into D&O and E&O shifts that uncertainty back to the insured. They don’t have to model every possible AI loss; they simply exclude the category and leave it to the market to develop affirmative products later. We’ve seen the same playbook with COVID-19, PFAS, and crypto: broad exclusions first, then gradual calibration and sometimes standalone coverage.
The twist is that cyber hasn’t followed suit yet. Cyber insurers are treating AI as an extension of existing threats—deepfakes, social engineering, AI-powered phishing—and keeping coverage in place, sometimes with explicit confirmations. The gap isn’t “no one will cover AI.” It’s that the policies many organizations rely on for management and professional liability are the ones where AI is being excluded. A firm that assumes “we have D&O and E&O” may discover that a claim involving AI is no longer covered, even though the underlying allegation is a classic breach of duty or professional negligence.
What Gets Caught in the Net
The practical effect is that a lot of ordinary-looking claims can be swept in. A discrimination suit alleging that an AI résumé-screening tool disadvantaged a protected group? The claim “arises out of” the use of AI. A negligence or malpractice claim where the professional used an AI tool as part of the work? Same. A shareholder or derivative suit alleging the board failed to oversee AI risk or made misleading statements about AI capabilities? The exclusion is written to reach that too. Even a dispute that is mainly about contract or services could be argued to “involve” AI if the insured used AI somewhere in the process—for example, in drafting or reviewing the contract or in generating marketing materials. The carrier doesn’t have to prove AI caused the loss; it only has to argue that the claim is “based upon” or “attributable to” AI in some way. Policyholders are left to argue narrow construction and reasonable expectations of coverage, which is a weaker position than having no exclusion at all.
Design professionals and other E&O-heavy sectors are already seeing AI exclusions in professional liability forms. Carriers are asking more pointed questions at renewal about where and how AI is used. That’s partly underwriting and partly preparation for enforcing the new exclusions: if the application didn’t disclose AI use, the carrier may later argue misrepresentation or non-disclosure. The risk isn’t only “we have an exclusion.” It’s “we have an exclusion and we’ll use your application answers to support a denial or rescission.”
What to Do Before and At Renewal
The consistent advice from coverage counsel and brokers is to negotiate before a loss, not after. Where you can, push for removal of the AI exclusion. Where you can’t, try to narrow it: tighter lead-in language (e.g., “solely arising out of” or “directly and solely caused by”), a clearer and narrower definition of AI, and carve-backs for specific uses or for defense costs. You want to avoid “in any way connected to” or “in any way involving” if possible, because those phrases maximize the carrier’s ability to tie any claim that touches AI to the exclusion.
Map your AI footprint before you fill out the application or sit down with the broker. Know where AI is used—product, operations, marketing, HR, legal, finance—and for what. That lets you answer renewal questions accurately and reduces the chance of a later rescission or denial based on “you didn’t tell us.” It also lets you argue for carve-backs that match your real exposure instead of accepting a one-size-fits-all exclusion.
If you end up with a broad AI exclusion anyway, treat a future denial as contestable. Exclusionary language is construed against the drafter; courts don’t always enforce wording that would swallow the policy. A claim that is fundamentally about professional negligence or board oversight might still be argued to fall outside the exclusion if the link to AI is incidental. Don’t assume the carrier’s first denial is the last word. Get coverage counsel involved early when a claim emerges.
The Bottom Line
AI exclusions are moving into D&O, E&O, and management liability because insurers are unwilling to absorb unquantified AI risk on existing terms. The language is broad, and “arising out of” does a lot of work. Cyber, for now, is not the main battleground; the battleground is the policies that protect directors, officers, and professionals. Review your renewals for new AI endorsements, negotiate to narrow or remove them where you can, document your AI use so your application is accurate, and don’t treat a denial as final if the tie to AI is tenuous. The exclusions are real, and they’re already in the market. Treating them as someone else’s problem is the mistake.