State AI Laws · Compliance · AI Governance · Multi-State · Regulatory Patchwork

38 States, 100+ AI Measures: How to Build a Compliance Program When Every State Has Different Rules


The numbers tell the story: 38 states have adopted or enacted roughly 100 AI-related measures. Hundreds more bills are in the pipeline. Deepfakes, employment tools, healthcare algorithms, chatbots, government use—each state is picking its own targets and its own definitions. If you operate in more than a handful of states, "comply with each state separately" is a recipe for chaos. But "wait for federal preemption" is a bet that may not pay off soon, or at all. The practical move is to build a single compliance program that is designed for the patchwork: one set of controls, one evidence base, and a clear map of where state-specific obligations still bite.

Why "One Rule per State" and "One Federal Rule" Both Fail You

Treating every state as a separate compliance project doesn't scale. Different effective dates, different definitions of "high-risk" and "consequential decision," different notice and appeal rights, different filing or disclosure requirements. You end up with 20 versions of your impact assessment template and no way to reuse evidence. The other extreme, assuming federal law will sweep in and preempt everything, ignores the timeline. Congress has not passed a comprehensive AI law. Even if it does, preemption language may leave plenty of state law intact (employment, consumer protection, sector-specific rules). You need a program that works now and that can absorb new state laws without a redesign.

The answer is a baseline that satisfies the strictest common obligations, with the handful of state-specific duties that genuinely differ layered on top: who gets notice, what gets filed, where appeals go, and when. Memorizing every state statute is the wrong move.

Start With the Strictest Common Denominator

Most state AI laws that affect the private sector share a small set of ideas: transparency (disclosure that AI is being used), accountability (someone owns the risk), fairness (bias and discrimination are in scope), and some form of impact or risk assessment for higher-stakes uses. The states that have gone furthest—California on employment and transparency, Colorado on high-risk systems and consumer rights, New York (and NYC) on bias audits—give you a working picture of what "strictest" looks like.

Use that as your baseline. If your program does the following, you're covering most of what the leading states require, and you're ahead of the rest:

Inventory and classify: you know which systems use AI and for what, and you classify by risk (e.g., red / yellow / green or equivalent) so that "consequential decisions" in employment, credit, housing, healthcare, insurance, and similar domains are explicitly high-risk (a sketch follows this list).

Impact assessments for high-risk systems: a documented assessment of what the system does, what data it uses, how you tested for bias and accuracy, and what safeguards are in place. Colorado, California, and others expect something in this family. Do it once, in a standard format; then excerpt or adapt for state-specific filings or disclosures where required.

Notice and rights: where AI is used in decisions that affect people (hiring, credit, benefits, etc.), individuals get notice and, where the state requires it, explanation, correction, or appeal. Build this into your product and process design so you're not bolting it on per state.

Governance and documentation: policies, risk ownership, and audit trails. Many state laws don't specify a particular framework, but they do expect you to be able to show what you did and why. A single governance structure (e.g., aligned to NIST AI RMF or your own taxonomy) gives you one place to document and one story for auditors and regulators.
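
In code, the inventory-and-classification step can be as simple as a tagged registry. Here's a minimal Python sketch; the tier names, domains, and classification rules are illustrative, not drawn from any statute:

```python
from dataclasses import dataclass, field

# Domains most state laws treat as "consequential decisions" (illustrative)
CONSEQUENTIAL_DOMAINS = {
    "employment", "credit", "housing", "healthcare", "insurance", "education"
}

@dataclass
class AISystem:
    name: str
    purpose: str
    domains: set[str] = field(default_factory=set)  # where its decisions land
    affects_individuals: bool = False

def classify(system: AISystem) -> str:
    """Assign an internal risk tier: red / yellow / green."""
    if system.domains & CONSEQUENTIAL_DOMAINS:
        return "red"      # consequential decisions: full impact assessment
    if system.affects_individuals:
        return "yellow"   # individual-facing but lower stakes: lighter review
    return "green"        # internal / low stakes: inventory entry only

resume_screener = AISystem(
    name="resume-screener",
    purpose="ranks job applicants",
    domains={"employment"},
    affects_individuals=True,
)
print(classify(resume_screener))  # -> "red"
```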

None of that is state-specific. It's the core of a program that travels.

What Still Varies by State?

The patchwork still forces a few distinct kinds of work. You can't ignore these; you can contain them.

Effective dates: Colorado's comprehensive law kicks in mid-2026. California's employment-related rules have their own deadlines. NYC's bias audit law is already in effect. Your program should have a simple state/obligation/effective-date tracker so you know when each obligation attaches. That's a small table, not 38 separate programs.
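
Here's what that tracker can look like as a few lines of Python rather than a spreadsheet. The dates, owners, and obligation names below are illustrative placeholders; confirm every date against the current statute or rule:

```python
from datetime import date

# state / obligation / effective-date tracker (all entries illustrative)
OBLIGATIONS = [
    {"state": "CO", "obligation": "high-risk impact assessments",
     "effective": date(2026, 6, 30), "owner": "ai-governance"},
    {"state": "NYC", "obligation": "bias audit + public summary",
     "effective": date(2023, 7, 5), "owner": "people-analytics"},
    {"state": "CA", "obligation": "employment ADS notice",
     "effective": date(2025, 10, 1), "owner": "hr-compliance"},
]

def live_obligations(as_of: date):
    """Obligations that have attached as of a given date."""
    return [o for o in OBLIGATIONS if o["effective"] <= as_of]

for o in live_obligations(date.today()):
    print(f'{o["state"]}: {o["obligation"]} (owner: {o["owner"]})')
```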

Definitions: "High-risk," "consequential decision," "algorithmic discrimination," and "deployer" vs "developer" don't mean the same thing everywhere. Make your internal risk classification (e.g., red / yellow / green) broader than any single state's definition: if your red tier includes everything any state calls high-risk, you're safe. Then, for a given state, you only need to know whether you have systems in that state's high-risk bucket so you can trigger state-specific steps (e.g., filing, disclosure, or appeal procedures).
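
A sketch of that trigger logic, assuming the red tier is already a superset of every state's definition (state names and domain lists are examples, not statutory text):

```python
# Domains each state treats as high-risk / consequential (illustrative)
STATE_HIGH_RISK_DOMAINS = {
    "CO": {"employment", "credit", "housing", "healthcare", "insurance", "education"},
    "NYC": {"employment"},  # bias-audit law covers hiring/promotion tools
}

# Your inventory after classification (see the earlier sketch)
SYSTEMS = [
    {"name": "resume-screener", "tier": "red", "domains": {"employment"}},
    {"name": "claims-triage", "tier": "red", "domains": {"insurance"}},
    {"name": "support-chatbot", "tier": "green", "domains": set()},
]

def in_state_bucket(state: str):
    """Systems in this state's high-risk bucket. Because the red tier is a
    superset of every state definition, we only filter red systems."""
    domains = STATE_HIGH_RISK_DOMAINS[state]
    return [s["name"] for s in SYSTEMS
            if s["tier"] == "red" and s["domains"] & domains]

print(in_state_bucket("NYC"))  # -> ['resume-screener']
print(in_state_bucket("CO"))   # -> ['resume-screener', 'claims-triage']
```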

Filing and disclosure: some states require notices to the AG, public summaries of bias audits, or registration of AI in certain uses. These are one-off or periodic tasks. Track them in the same place as your other compliance calendar; assign owners; use the same underlying evidence (your impact assessments, audit results, and policies) to populate them. The work is "what do we file and where," not "build a new program for this state."
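
One way to keep filings on the same calendar and pointed at the same evidence is to record them together. The document IDs and filing names here are hypothetical:

```python
# Filings tracked alongside other compliance tasks, each pointing at
# existing evidence artifacts (all entries illustrative)
FILINGS = [
    {"state": "NYC", "output": "public bias-audit summary", "cadence": "annual",
     "owner": "people-analytics", "evidence": ["bias-audit-2025"]},
    {"state": "CO", "output": "AG notice", "cadence": "as-triggered",
     "owner": "legal", "evidence": ["impact-assessment-resume-screener-v3"]},
]

def filing_worklist(state: str):
    """What we file for a state, and which existing evidence populates it."""
    return [(f["output"], f["owner"], f["evidence"])
            for f in FILINGS if f["state"] == state]

print(filing_worklist("NYC"))
```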

Penalties and enforcement: ranges vary. Some states cap penalties at a few hundred thousand dollars; others reach into the millions. Some have no private right of action; others do. Your risk and legal teams should know which states you're in and which have the teeth. That informs prioritization and insurance, not the design of your core controls.

One Evidence Base, Many Outputs

The biggest leverage is a single evidence base. One set of impact assessments, one inventory, one control set, one policy set. When a new state law goes live, you're not building from zero. You're answering: Does this state require something we don't already do? If yes, what's the delta (e.g., a different notice, a public summary, a filing)? Then you produce the output from the evidence you already have.
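
The delta question reduces to a set difference. A minimal sketch, with illustrative control names:

```python
# Controls your baseline program already runs (illustrative)
BASELINE_CONTROLS = {
    "inventory", "risk-tiering", "impact-assessment", "bias-testing",
    "individual-notice", "governance-policy", "audit-trail",
}

# Requirements extracted from the new state's law (illustrative)
NEW_STATE_REQUIREMENTS = {
    "impact-assessment", "individual-notice",
    "consumer-appeal-process",   # not in our baseline yet
    "ag-filing",                 # a new output, not a new program
}

# What we actually have to build for this state
delta = NEW_STATE_REQUIREMENTS - BASELINE_CONTROLS
print(delta)  # -> {'consumer-appeal-process', 'ag-filing'}
```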

Example: You already run bias testing and document it for NYC and similar obligations. Colorado wants impact assessments and risk management programs. Your existing assessment and governance docs, plus a mapping to Colorado's concepts (e.g., "consequential decision," "algorithmic discrimination"), become the backbone of your Colorado response. You may need to add a consumer-facing appeal process or a specific statement of reasons; you're extending, not replacing.

Same for audits. If you have an annual review of high-risk AI (accuracy, bias, drift, incidents), that review can feed multiple state requirements—different states may want different slices (e.g., public summary vs. regulator-only). One review, many reports.
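
A sketch of the one-review, many-reports idea: one record, several views. Field names and values are illustrative:

```python
# One annual review record for a high-risk system (illustrative data)
REVIEW = {
    "system": "resume-screener",
    "period": "2025",
    "bias_metrics": {"impact_ratio_sex": 0.91, "impact_ratio_race": 0.88},
    "accuracy": 0.83,
    "incidents": ["2025-03 mislabeled degree field"],
    "remediations": ["retrained on corrected data"],
}

# Different states want different slices of the same review
SLICES = {
    "public_summary": ["system", "period", "bias_metrics"],
    "regulator_report": ["system", "period", "bias_metrics",
                         "accuracy", "incidents", "remediations"],
}

def render(slice_name: str) -> dict:
    """Produce one state-facing view from the single review record."""
    return {k: REVIEW[k] for k in SLICES[slice_name]}

print(render("public_summary"))
```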

NIST AI RMF as a Travel-Friendly Backbone

The NIST AI Risk Management Framework isn't law anywhere in the U.S., but it's a useful organizing structure. Its four functions map well to what states are asking for: governance and policy (Govern), inventory and context (Map), testing and monitoring (Measure), and response and improvement (Manage). Some states (e.g., Colorado, Texas) reference conformity with recognized frameworks as a factor in enforcement or as an affirmative defense. Using NIST AI RMF (or something aligned to it) gives you a consistent way to describe your program and to show that you're not ad hoc. When a new state law lands, you're not inventing a new framework; you're checking which of your existing RMF-aligned activities satisfy the new requirement and what's left to add.
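
One way to make that crosswalk concrete is to tag each program activity with an RMF function, then look up what you already have when a new law touches a given function. The activity names below are our own labels, not NIST's:

```python
# Crosswalk from RMF functions to concrete program activities (illustrative)
RMF_CROSSWALK = {
    "Govern":  ["governance-policy", "risk-ownership", "audit-trail"],
    "Map":     ["inventory", "risk-tiering", "context-documentation"],
    "Measure": ["bias-testing", "accuracy-testing", "drift-monitoring"],
    "Manage":  ["incident-response", "remediation", "annual-review"],
}

def activities_for(functions: list[str]) -> list[str]:
    """Existing activities that map to the RMF functions a new law touches."""
    return [a for f in functions for a in RMF_CROSSWALK[f]]

# A new impact-assessment mandate mostly touches Map and Measure
print(activities_for(["Map", "Measure"]))
```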

The Program You Build Is the One You Keep

The real test of a multi-state AI compliance program is whether it's still running in two years. If it depends on 38 different playbooks and 38 different evidence stores, it won't be. If it's one program—one risk tiering, one assessment format, one governance and doc trail—with a thin layer of state-specific triggers and outputs, you can add state 39 and 40 without a redesign. You're not pretending the patchwork doesn't exist; you're building so the patchwork doesn't own you.


We design AI compliance programs that hold up across the state patchwork. Contact us for independent AI risk assessments and governance program design.
