A few years ago you could still get cyber coverage by answering a short application and paying the premium. Then ransomware blew up. Loss ratios spiked; some carriers paid out more in claims than they took in. The market didn't just raise prices. It changed the game. Underwriters started demanding proof: MFA everywhere, EDR on every endpoint, immutable backups, patch cadence, incident response plans. The application became a control checklist. No evidence, no quote (or no coverage for the risks they cared about).
That playbook is now being applied to AI. Carriers and regulators are turning AI governance into an underwriting input. If you can't show inventory, policy, and some form of risk oversight, you're facing steeper terms, broader exclusions, or a harder conversation at renewal. Governance isn't just compliance theater anymore. It's becoming a prerequisite for insurance.
How Cyber Underwriting Actually Changed
The shift in cyber wasn't subtle. Through 2019–2021, ransomware and business email compromise drove claim severity and frequency to levels that made many portfolios unprofitable. Carriers that had been writing on the basis of revenue and industry started asking what controls were in place. Multi-factor authentication went from "recommended" to "required." Endpoint detection and response replaced "we have antivirus" as the baseline. Backups had to be offline or immutable, with test restores. Applications got longer. Underwriters began asking for screenshots, console exports, and evidence that controls were actually deployed, not just that someone had bought the tool.
The logic was straightforward. You can't price an unknown. If you don't know whether the insured has MFA or whether their backups are restorable, you're guessing. Once losses hit, the only levers were price, sublimits, and, increasingly, eligibility. The market moved to control-based underwriting: define the controls that matter, ask for evidence, and tie terms to what you see. Today, a large share of SMB cyber applications function like mini audits. Rejection rates for first-time applicants climbed into the 40% range. The applicants that get through often do so because they can document the controls the carrier expects.
AI is following the same arc. The exposure is hard to model (correlated failures, fast-changing use cases, liability theories still in flux). Carriers aren't waiting for a decade of loss data. They're asking what you're doing to govern AI, and they're starting to tie that to what they're willing to cover and at what price.
What Carriers and Regulators Are Asking For
The ask isn't uniform yet, but the direction is clear. Regulators are ahead of the curve. New York's Department of Financial Services, in Circular Letter No. 7 (2024), told insurers using AI in underwriting and pricing that they need governance frameworks: board and senior management oversight, written policies and procedures, model risk management, training, independent risk assessment, and vendor oversight. Colorado's Division of Insurance has expanded requirements so that insurers using external data and algorithms in auto and health must have governance and risk management frameworks in place, with interim reporting and a compliance timeline. The NAIC's model bulletin on AI has been adopted in roughly two dozen states, emphasizing safety, fairness, accountability, and transparency, and the need to document how you achieve them.
When insurers are required to have that in place, the next step is for them to ask the same of their insureds. If you're a company buying coverage (general liability, tech E&O, cyber, D&O), underwriters are increasingly interested in how you use AI and how you manage the risk. The questions are still evolving, but they map to the same themes: Do you know where AI is used? Do you have a policy? Do you assess high-risk use? Do you have incident response for AI? Can you show it?
Concretely, that means: an AI inventory (or at least a credible description of use cases), an acceptable-use or AI policy that's actually published and referenced, some form of risk classification (what's high risk, what's not), and, for high-risk use, something that looks like an impact assessment or review. It doesn't mean you need a 50-person AI ethics team. It means you need to be able to answer "how do you govern AI?" with more than "we're working on it."
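To make the inventory-plus-classification idea concrete, here is a minimal sketch in Python. The field names, risk domains, and two-tier classification are illustrative assumptions for this example, not an underwriting standard; the point is that even a simple structured record lets you answer "what's high risk, and has it been reviewed?" on demand.

```python
from dataclasses import dataclass

# Hypothetical high-risk domains, loosely mirroring the uses underwriters
# and regulators focus on: decisions affecting people, and content that
# reaches third parties. Adjust to your own risk taxonomy.
HIGH_RISK_DOMAINS = {"hiring", "credit", "eligibility", "external_content"}

@dataclass
class AIUseCase:
    name: str
    owner: str                      # team accountable for the use case
    domain: str                     # e.g. "hiring", "internal_drafting"
    vendor: str = "internal"        # "internal" or the vendor's name
    assessment_done: bool = False   # impact assessment / review completed?

    @property
    def high_risk(self) -> bool:
        return self.domain in HIGH_RISK_DOMAINS

def renewal_gaps(inventory):
    """High-risk use cases with no completed assessment — the items an
    underwriter is likely to ask about first."""
    return [u.name for u in inventory if u.high_risk and not u.assessment_done]

inventory = [
    AIUseCase("resume screening", "HR", "hiring", vendor="acme-ai"),
    AIUseCase("draft marketing copy", "Marketing", "internal_drafting"),
    AIUseCase("customer-facing chatbot", "Support", "external_content"),
]

print(renewal_gaps(inventory))
# -> ['resume screening', 'customer-facing chatbot']
```

A spreadsheet with the same columns does the job just as well; what matters is that the record exists, has an owner, and distinguishes high-risk use from the rest.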
Why This Feels Like Cyber All Over Again
The parallel isn't accidental. In both cases the carrier faces an exposure that's hard to bound. In cyber, the question was whether the insured would get hit and how bad it would be; controls gave a proxy for that. In AI, the question is whether the insured's use of AI will produce a claim (discrimination, defamation, IP, bodily injury, bad decisions) and how severe. Without a way to distinguish between "we use ChatGPT for internal drafts" and "we use an unvetted model to make hiring or credit decisions," the underwriter has to assume the worst or exclude the risk entirely. Governance gives a way to differentiate. Organizations that can show inventory, classification, and some form of oversight look less like black boxes. They're not zero risk, but they're easier to underwrite.
There's another echo. In cyber, the market moved from "do you have a firewall?" to "show us your EDR coverage, your backup test report, your IR tabletop log." The ask moved from attestation to evidence. The same is starting for AI. "Do you have an AI policy?" is giving way to "Can you show us how you classify use cases, and do you have completed assessments for high-risk applications?" Carriers that have been burned by silent cyber, or that see AI as the next silent exposure, are inclined to get ahead of it. That means either excluding AI (as with the new ISO generative-AI endorsements) or conditioning coverage and terms on governance. The second path only works if they can see what you're doing.
What to Have in Place Before Renewal
If you want to be on the right side of that conversation, a few things matter.
Know your footprint. You need a working view of where AI is used: product, marketing, HR, legal, operations, vendors. It doesn't have to be perfect. It has to be good enough to describe to an underwriter and to know where the high-risk use lives. If you can't say what's high risk and what isn't, you can't ask for carve-backs or argue for narrower exclusions.
Have a policy and point to it. An AI acceptable-use or governance policy that says what's in and out of bounds, and that's actually in use (referenced in procurement, onboarding, or release), is the baseline. It doesn't need to be 50 pages. It needs to exist and be findable. Underwriters are starting to ask for it.
Do something visible for high-risk use. For applications that affect people (hiring, credit, eligibility, content that reaches third parties), have a process. That might be an impact assessment, a review, or a sign-off. The point is to show that high-risk AI isn't ungoverned. That's what gives a carrier a reason to offer terms instead of a blanket exclusion.
Connect it to the evidence you already keep. If you have a risk register, include AI risks. If you have an incident response process, extend it to AI (or document how AI incidents are handled). If you do control testing or audit, make sure AI is in scope where it matters. Governance that lives in a single "AI compliance" deck is easy to forget. Governance that's wired into the same risk and control language you use for everything else is easier to maintain and to show.
Treat the application and renewal as a deadline. When the application asks about AI use or AI governance, answer accurately. Vague or optimistic answers can support a denial or rescission later. If you don't have something yet, say what you're doing and by when. Underwriters would rather see a credible plan than a blank or a promise that doesn't match the rest of your posture.
The Tradeoff Carriers Are Making
Carriers are not all moving at the same speed. Some are attaching broad AI exclusions and calling it a day. Others are using exclusions but leaving room to negotiate carve-backs or sublimits for insureds that can describe and demonstrate governance. The latter group is effectively saying: we'll cover some AI risk if we can see how you're managing it. That's the same bargain that emerged in cyber: better terms for better controls, and evidence that the controls are real.
For insureds, the implication is that governance isn't just for the regulator or the board. It's for the carrier. The better you can show what you're doing (inventory, policy, classification, oversight), the more likely you are to get a real conversation about coverage, exclusions, and price instead of a take-it-or-leave-it form. AI governance as an insurance prerequisite is still taking shape. The playbook is the one cyber wrote: controls and evidence first, then coverage. Get ahead of it.
Preparing for AI-related underwriting questions or renewals? We run independent AI risk assessments and help design governance programs. Contact us.