June 30, 2026: Colorado's AI law (SB 24-205) takes effect. First state-level framework in the U.S. that explicitly regulates "high-risk" AI. Headquarters location is irrelevant. If your product affects Colorado residents the way the law defines, you're in scope.
"High-risk" is not "we use AI." It's a specific legal test. Miss it and you either over-scope everything or assume you're fine when you're not.
The Test: Consequential Decisions, Not "AI"
Under the Colorado AI Act, a high-risk AI system is one that makes, or is a substantial factor in making, a consequential decision about a consumer. Two things matter: what counts as a consequential decision, and what "substantial factor" means.
A consequential decision has a material legal or similarly significant effect on a person's access to or the terms of: employment, education, financial or lending services, essential government services, health care, housing, insurance, or legal services. The statute is deliberately broad. Not just "denying a loan." Any decision that materially affects access or terms in those domains. That can include screening, ranking, scoring, recommending, or filtering that leads to an outcome: hiring, admission, pricing, eligibility, coverage, housing offers, benefits.
"Substantial factor" is where product and legal teams get tripped up. A human can make the final call while your AI does the screening, scoring, or shortlisting. The system can still be a substantial factor. The law doesn't require the AI to be the sole decision-maker. If the AI's output is a meaningful input to a human decision that has one of those material effects, you're in the high-risk bucket. Resume screening, interview scoring, credit or insurance scoring, tenant screening, eligibility determination, triage or prioritization for services: all in scope when they affect Colorado consumers.
If the AI doesn't make or substantially influence a consequential decision, the high-risk obligations don't apply. Internal tools that don't affect consumer access or terms, or that only support non-consequential workflows, fall outside this definition. The line: does this output materially affect a person's access or terms in one of those eight areas? If no, you're not high-risk under this law.
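As a minimal sketch, here's that two-part test expressed as a scoping checklist in code. The domain labels, field names, and `is_high_risk` helper are ours for illustration, not anything from the statute, and a boolean is no substitute for counsel on borderline cases:

```python
from dataclasses import dataclass

# The eight covered domains from the "consequential decision" definition.
COVERED_DOMAINS = {
    "employment",
    "education",
    "financial_or_lending_services",
    "essential_government_services",
    "health_care",
    "housing",
    "insurance",
    "legal_services",
}

@dataclass
class AISystemUse:
    name: str
    domain: str                       # which life domain the output touches
    affects_access_or_terms: bool     # material effect on access or terms?
    substantial_factor: bool          # meaningful input to the decision, even with a human in the loop
    affects_colorado_consumers: bool

def is_high_risk(use: AISystemUse) -> bool:
    """First-pass screen mirroring the article's reading of the test.
    Not legal advice: borderline answers need counsel, not a boolean."""
    return (
        use.affects_colorado_consumers
        and use.domain in COVERED_DOMAINS
        and use.affects_access_or_terms
        and use.substantial_factor
    )

# Example: resume screening that shortlists candidates for a human recruiter.
resume_screen = AISystemUse(
    name="resume-screening-model",
    domain="employment",
    affects_access_or_terms=True,    # shortlisting gates access to the job
    substantial_factor=True,         # a human decides, but the AI filters the pool
    affects_colorado_consumers=True,
)
print(is_high_risk(resume_screen))   # True -> in scope under this reading
```

Note that `substantial_factor=True` even though a human makes the final call: the shortlist is a meaningful input, which is exactly the trap described above.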
No Size Exemption (For Now)
Small or mid-size companies often assume they're exempt. Mostly, they're not. Under the version of the law that takes effect in June 2026, there is no blanket employee-count or revenue exemption for high-risk AI. The Act's only size-based relief is narrow: deployers with fewer than 50 full-time employees that meet specific conditions (including not training the system on their own data) are excused from a subset of deployer duties, such as the risk management program and impact assessments. Broader proposals to exempt organizations under 250 employees or under a certain revenue threshold were debated in the 2025 legislative session but did not become law. A 50-person startup whose product uses AI to screen job applicants, set insurance premiums, or rank tenants for Colorado residents otherwise faces the same developer or deployer duties as a large enterprise. That may change in future sessions. As of today the law applies by use case, not by company size.
Developer vs Deployer: Two Roles, Different Duties
The Act splits obligations between developers (who build or substantially modify the high-risk system) and deployers (who use it to make or influence consequential decisions about consumers). You can be both. A company that builds its own underwriting model and uses it to approve or deny applications is both developer and deployer.
Developers must use reasonable care to protect consumers from known or reasonably foreseeable risks of algorithmic discrimination. They must disclose such risks to the Attorney General and to deployers within 90 days of discovery or of receiving a credible report. They must publish a public statement describing the types of high-risk systems they develop and how they manage discrimination risks. They must give deployers the information and documentation needed to run impact assessments and meet their own duties. Vendors selling high-risk AI into employment, lending, housing, or the other covered domains are on the hook for risk disclosure, documentation, and that public statement, and for feeding deployers what they need to comply.
Deployers must run impact assessments, maintain risk management policies and programs, and review, at least annually, whether the system is causing algorithmic discrimination. They must report discovered algorithmic discrimination to the Attorney General within 90 days. They must tell consumers the AI is in use before or at the time of a consequential decision, and after an adverse decision provide a statement of reasons, a way to correct inaccurate personal data, and, where feasible, an opportunity to appeal to a human. They also have transparency and documentation obligations. The organization that actually uses the AI to screen, score, or decide owns the consumer-facing process: notice, explanation, correction, and appeal.
For product teams, the takeaway: figure out whether you're a developer, a deployer, or both. If you're a deployer, your roadmap needs impact assessments, risk management, an annual review, notice and appeal flows, and documentation. If you're a developer, it needs risk disclosure, deployer-facing documentation, and the public statement. If you're both, you need both sets. A rough sketch of the split follows.
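One way to make this concrete in a compliance backlog is a duty checklist keyed by role. The lists below paraphrase the Act in our own words; the labels and the `duties` helper are illustrative, not statutory language:

```python
# Illustrative checklists paraphrasing the Act's duties; wording is ours.
DEVELOPER_DUTIES = [
    "use reasonable care against algorithmic discrimination",
    "disclose known/foreseeable discrimination risks to the AG and deployers within 90 days",
    "publish a public statement on high-risk systems and risk management",
    "give deployers the documentation they need for impact assessments",
]

DEPLOYER_DUTIES = [
    "run and maintain impact assessments",
    "maintain a risk management policy and program",
    "review at least annually for algorithmic discrimination",
    "report discovered discrimination to the AG within 90 days",
    "provide consumer notice, statement of reasons, correction, and human appeal",
]

def duties(builds_or_modifies: bool, uses_for_consequential_decisions: bool) -> list[str]:
    """Combined checklist for an organization's role(s); you can be both."""
    out: list[str] = []
    if builds_or_modifies:
        out += DEVELOPER_DUTIES
    if uses_for_consequential_decisions:
        out += DEPLOYER_DUTIES
    return out

# A company that builds and runs its own underwriting model owns both lists.
for duty in duties(builds_or_modifies=True, uses_for_consequential_decisions=True):
    print("-", duty)
```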
Algorithmic Discrimination and the 90-Day Clock
The law targets algorithmic discrimination: unlawful differential treatment or impact on consumers based on protected characteristics (e.g., race, color, religion, sex, national origin, disability, age, and other state and federal protected classes). Both developers and deployers must use reasonable care to avoid it and, if they discover it or get a credible report, notify the Colorado Attorney General within 90 days. That 90-day window is strict. Not "when we finish an internal investigation." Ninety days from discovery or from receiving a credible report. You need a clear process for what "discovery" means, who decides when the clock starts, and who is responsible for filing. Compliance with the Act's specified steps gives you a rebuttable presumption of reasonable care. Skimping on documentation or missing the 90-day window does not.
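The clock itself is trivial to compute; the hard part is deciding, ahead of time, which event starts it. A sketch, assuming the count runs in calendar days from the triggering event (confirm the counting rules with counsel):

```python
from datetime import date, timedelta

NOTIFICATION_WINDOW = timedelta(days=90)

def ag_notification_deadline(clock_start: date) -> date:
    """Deadline to notify the Colorado Attorney General, counted from
    discovery or from receipt of a credible report -- whichever event
    your process decides started the clock."""
    return clock_start + NOTIFICATION_WINDOW

# Example: a credible report lands September 15, 2026.
print(ag_notification_deadline(date(2026, 9, 15)))  # 2026-12-14
```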
Timeline and Enforcement
The Act takes effect June 30, 2026. Through June 30, 2027, the state is in an education-focused period: the Attorney General may enforce, but the emphasis is on getting organizations into compliance rather than on maximum penalties. From July 1, 2027, full enforcement applies, including penalties of up to $20,000 per violation. Only the Attorney General can enforce; there is no private right of action. That doesn't reduce the need to comply. It just means plaintiffs' attorneys can't sue under this statute. Regulators can, and will, ask for documentation on impact assessments, risk management, disclosures, and consumer notice and appeal.
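For planning purposes, the timeline reduces to two dates. A small sketch of the phases as described above; the phase labels are ours:

```python
from datetime import date

EFFECTIVE = date(2026, 6, 30)        # Act takes effect
FULL_ENFORCEMENT = date(2027, 7, 1)  # education-focused period ends

def enforcement_phase(today: date) -> str:
    """Which posture applies on a given date, per the timeline above."""
    if today < EFFECTIVE:
        return "pre-effective: build compliance now"
    if today < FULL_ENFORCEMENT:
        return "education-focused: AG may enforce, emphasis on compliance"
    return "full enforcement: penalties up to $20,000 per violation"

print(enforcement_phase(date(2026, 12, 1)))  # education-focused period
```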
What to Do Next
Map your systems to the statutory test. For each AI system that touches Colorado consumers, ask: does it make or substantially influence a decision that has a material legal or similarly significant effect on access to or terms of employment, education, financial services, health care, housing, insurance, legal services, or essential government services? If yes, it's high-risk under this law.
Assign developer vs deployer. If you build and operate the system, you're both. If you're a vendor, you're a developer and your customers are deployers; your contracts and docs need to support their impact assessments and consumer flows. If you're only deploying someone else's system, you're a deployer. Get the documentation you need from the developer and own the impact assessment, risk management, notice, and appeal.
Impact assessments and risk management are mandatory. The Act doesn't treat them as nice-to-haves. They're part of the reasonable-care standard and the rebuttable presumption. Document what you assessed, what you found, and what you're doing to mitigate. Keep it current when the system or use case changes.
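If you're building that documentation, a minimal record shape helps keep assessments consistent and reviewable. The fields below are our paraphrase of "what you assessed, what you found, what you're mitigating"; the Act and any Attorney General rules control the actual required contents:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ImpactAssessment:
    """One possible record shape for deployer impact-assessment docs.
    Fields paraphrase 'what you assessed, found, and are mitigating';
    the Act and any AG rules control the real required contents."""
    system_name: str
    use_case: str                  # the consequential decision it feeds
    assessed_risks: list[str]      # known or reasonably foreseeable discrimination risks
    findings: list[str]            # what testing and analysis actually showed
    mitigations: list[str]         # what you're doing about each finding
    last_reviewed: date            # reviewed at least annually
    material_changes_since_review: bool = False
    attachments: list[str] = field(default_factory=list)  # test results, vendor docs

    def review_due(self) -> bool:
        """Stale if over a year old or the system/use case changed."""
        age_days = (date.today() - self.last_reviewed).days
        return age_days > 365 or self.material_changes_since_review
```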
Design for notice and appeal now. If your product is a high-risk deployer use case, you need pre-decision (or at least pre-adverse-outcome) notice that AI is involved, plus statement of reasons, correction, and human appeal where feasible. That's product and UX work, not just legal. Building it in later is harder than designing for it from the start.
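A sketch of that consumer-facing checklist as a data shape, usable as a release gate for adverse-decision flows. Field names and the `missing_pieces` check are ours, not the statute's:

```python
from dataclasses import dataclass

@dataclass
class AdverseDecisionPacket:
    """The consumer-facing artifacts an adverse-decision flow should
    produce. Field names are ours, not the statute's."""
    pre_decision_notice_shown: bool  # consumer told AI is involved, before the decision
    statement_of_reasons: str        # why the adverse outcome, where feasible
    correction_channel: str          # how to fix inaccurate personal data
    human_appeal_channel: str        # route to a human reviewer

def missing_pieces(p: AdverseDecisionPacket) -> list[str]:
    """Release-gate check: empty list means the packet is complete."""
    missing = []
    if not p.pre_decision_notice_shown:
        missing.append("pre-decision AI notice")
    if not p.statement_of_reasons:
        missing.append("statement of reasons")
    if not p.correction_channel:
        missing.append("data correction channel")
    if not p.human_appeal_channel:
        missing.append("human appeal channel")
    return missing
```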
Colorado's law is a signal of where state-level AI regulation is heading. "High-risk" turns on impact on people's access and terms in specific life domains, not on model size or whether the AI is "important." If your product crosses that line for Colorado residents, June 2026 is when the obligations attach. The grace period is there to get ready, not to pretend the law doesn't apply.
We help teams map systems to the Colorado test, run impact assessments, and build risk documentation. Get in touch if you're building or deploying AI in employment, lending, housing, or other consequential domains.