Colorado's AI Act, the EU AI Act, and a growing number of state laws now require documented impact assessments for high-risk AI systems. Not recommendations. Requirements. And they're not one-time. Regulators and the laws themselves expect these assessments to be living documents: updated when the system is modified, when new risks emerge, or on a defined schedule.
That can sound like a quarter-long project per system. It doesn't have to be. Here's how to scope, conduct, document, and maintain an algorithmic impact assessment (AIA) that satisfies regulators without swallowing your calendar.
What Regulators Actually Want
An AIA in this context is a structured record of what the system does, who it affects, what risks it creates, and what you're doing about them. Colorado's law, the EU AI Act, and other state frameworks vary in wording, but they converge on a few expectations. You need to describe the system and its purpose. You need to identify and assess risks, including discrimination, accuracy, security, and transparency. You need to document mitigations and ongoing monitoring. You need to show that the assessment was done in good faith and that it's current. "We did a one-pager in 2024 and never looked at it again" won't cut it. Neither will a 200-page treatise that nobody can use. The sweet spot is enough structure and evidence that an auditor or regulator can see what you considered and what you did, in a form you can actually keep up to date.
Step 1: Scoping
Don't try to assess "our AI" as a single blob. Scope to a defined system or use case: one deployment, one decision flow, one product feature that uses AI. "Resume screening model used by HR for initial candidate filtering" is scoped. "All our AI" is not. For each scoped system, nail down: what the system does (inputs, outputs, decision or influence point), who is affected (applicants, customers, employees, patients), what data it uses and where it comes from, and who owns it (accountable owner and team). If you've already classified use cases by risk (red, yellow, green), your high-risk and cautious use cases are your AIA candidates. Tight scoping keeps the assessment from sprawling and gives you a clear unit to document and maintain.
Keep it bounded. If one model supports five use cases with different risk profiles, you can have one technical core documented and five use-case-specific impact sections, or five lighter AIAs that reference the same technical doc. Don't multiply work by documenting the same model five times with no linkage. One source of truth for the system; use-case-level addenda where impact or risk differs.
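The scoping fields above can be captured as a small structured record, which also makes it easy to link five use-case AIAs back to one technical core. This is an illustrative sketch; the field names and the `ScopedSystem` class are ours, not from any regulation or framework.

```python
from dataclasses import dataclass

@dataclass
class ScopedSystem:
    """One AIA unit: a single deployment or decision flow, not 'all our AI'."""
    name: str                    # e.g. "Resume screening model (HR initial filter)"
    purpose: str                 # what it does and its decision/influence point
    affected_persons: list[str]  # applicants, customers, employees, patients, ...
    data_sources: list[str]      # what data it uses and where it comes from
    owner: str                   # accountable owner (person or team)
    risk_tier: str = "high"      # from your red/yellow/green classification
    technical_core_doc: str = "" # shared model doc, if several use cases reuse it

screening = ScopedSystem(
    name="Resume screening model (HR initial filter)",
    purpose="Ranks inbound applications before recruiter review",
    affected_persons=["job applicants"],
    data_sources=["applicant-provided resumes", "job requisition text"],
    owner="HR Systems team",
    technical_core_doc="docs/models/resume-ranker.md",
)
```

If the record is hard to fill in, that's a sign the scope is still too blobby.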
Step 2: Conducting the Assessment
This is where you identify and evaluate risks. Use a simple framework so you don't miss categories regulators care about.
Discrimination and fairness: Who could be disproportionately harmed? Consider protected characteristics (race, sex, age, disability, etc.) and whether the system's data, design, or outputs could create or amplify bias. Have you tested for disparate impact? What did you find? If you haven't tested yet, say so and state what you'll do (e.g., run a fairness evaluation by [date], document results, and update the AIA).
Accuracy and reliability: How accurate is the system on relevant metrics? What happens when it's wrong? Who is affected and how? Document performance on holdout or test data, any known failure modes, and how errors are detected and corrected in production.
Transparency and explainability: Can affected persons or oversight bodies understand how the system works and how it reached an outcome? What explanations or disclosures do you provide? If explainability is limited (e.g., black-box model), what mitigations do you have (human review, appeal process, logging)?
Security and safety: What could go wrong from misuse, attack, or failure? Prompt injection, data leakage, model extraction, or operational failure? What controls are in place? Brief is fine; "we use approved APIs with access controls and input validation; we do not expose model internals" is a valid answer if it's true.
Data and governance: What data feeds the system? Where does it come from? How is it maintained? Are there consent, retention, or data-quality issues? Document at a level that shows you've thought about it. You don't need a full data governance manual inside the AIA; you need a clear summary and a pointer to detailed docs if they exist.
For each risk area, record: what we looked at, what we found (or what we're still evaluating), what we're doing about it, and who's responsible. If a finding is open (e.g., "fairness testing planned for Q2"), say so and set a date to update. Regulators would rather see "we identified this gap and here's our plan" than a static doc that implies everything is done.
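For the discrimination check specifically, one common first-pass screen is the four-fifths rule: compare selection rates across groups and flag a ratio below 0.8 for deeper review. It's a screening heuristic, not a legal determination, and the function below is a minimal sketch assuming you already have selection counts per group.

```python
def selection_rates(outcomes):
    """outcomes: dict mapping group name -> (selected_count, total_count)."""
    return {group: sel / tot for group, (sel, tot) in outcomes.items()}

def disparate_impact_ratio(outcomes):
    """Ratio of the lowest group selection rate to the highest.

    A ratio below 0.8 is the classic four-fifths-rule flag: not proof of
    discrimination, but a signal to run a fuller fairness evaluation and
    record the finding (and plan) in the AIA.
    """
    rates = selection_rates(outcomes)
    return min(rates.values()) / max(rates.values())

# Hypothetical counts: group_a selected 45 of 100, group_b 30 of 100.
ratio = disparate_impact_ratio({"group_a": (45, 100), "group_b": (30, 100)})
# 0.30 / 0.45 is about 0.67, below 0.8, so this would go in the AIA as an
# open finding with a dated plan for deeper analysis.
```

Whatever the result, record it: a passing ratio with the date and data slice it was computed on, or a failing one with the follow-up plan.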
Step 3: Documenting
The output is a living document, not a one-off report. Structure it so that updates are straightforward.
Use a consistent template. Same sections across AIAs: system description, purpose, affected persons, data, risk assessment by category (discrimination, accuracy, transparency, security, data), mitigations, monitoring and review plan, and revision history. A template ensures you don't forget a category and makes it easy for auditors to compare systems. Keep the template in a place your team and legal/compliance can access.
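A cheap way to enforce the template is a completeness check that runs over each AIA document. The section names below mirror the list above but are otherwise illustrative; adapt them to your own template.

```python
# Illustrative section list mirroring the template described above.
AIA_SECTIONS = (
    "System description",
    "Purpose",
    "Affected persons",
    "Data",
    "Risk assessment: discrimination",
    "Risk assessment: accuracy",
    "Risk assessment: transparency",
    "Risk assessment: security",
    "Risk assessment: data governance",
    "Mitigations",
    "Monitoring and review plan",
    "Revision history",
)

def missing_sections(doc_text: str) -> list[str]:
    """Return template sections absent from an AIA document.

    A crude substring check, but enough to catch a forgotten category
    before an auditor does.
    """
    lowered = doc_text.lower()
    return [s for s in AIA_SECTIONS if s.lower() not in lowered]
```

Run it in whatever pipeline reviews your AIAs; an empty result doesn't mean the content is good, only that no category was skipped.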
Write for audit. Assume a regulator or auditor will read it. Be specific. "We tested for disparate impact by protected class and found no statistically significant difference" is better than "we considered fairness." "We retrain quarterly on data from the last 12 months" is better than "we keep the model updated." If you're relying on a vendor (e.g., for model cards or testing), say so and note what you've verified yourself versus what you're taking on representation.
Tie to evidence. Where you have test results, model cards, or third-party assessments, reference them. The AIA doesn't have to contain every chart, but it should point to where the evidence lives and summarize the conclusion. That way the AIA stays readable while remaining defensible.
Revision history: At the end (or in a standard section), keep a log: date, version, summary of changes, and who made the update. When you update because the system changed or because you completed a planned action, add a new row. That log is how you show it's a living document.
Step 4: Maintaining
An AIA that's accurate at launch and stale a year later doesn't satisfy "living document" expectations. Build maintenance into the process.
Trigger on change. When the system changes in a way that could affect risk (new data source, new model version, new use case, new population), update the AIA. Define "significant change" for your context (e.g., retrain, new feature, new data type) and make updating the AIA part of the release or change process. No deploy of a significant change without an AIA update (or a documented decision that no update was needed and why).
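The release-gate rule above ("no deploy of a significant change without an AIA update, or a documented decision that none was needed") can be sketched as a simple check. The function and its parameters are hypothetical; the point is that the gate compares dates and requires an explicit waiver, never a silent pass.

```python
import datetime
from typing import Optional

def aia_gate(last_significant_change: datetime.date,
             last_aia_update: datetime.date,
             waiver_reason: Optional[str] = None) -> bool:
    """Release check: may this significant change ship?

    Passes if the AIA was updated on or after the change, or if a
    documented 'no update needed' decision (waiver_reason) exists.
    The waiver itself should still go into the AIA revision history.
    """
    if last_aia_update >= last_significant_change:
        return True
    if waiver_reason:
        return True
    return False
```

Wire it into the same change process that gates code review or security sign-off, so the AIA update can't be skipped by forgetting.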
Trigger on schedule. Even without a change, revisit high-risk AIAs at least annually. Review each section: is it still accurate? Have we completed any planned actions? Have new risks emerged? Update the doc and the revision history. A short checklist ("review accuracy metrics, fairness metrics, incident log, open findings") keeps the review from becoming a rewrite.
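The annual-review checklist can live as data next to the AIA, so each review is a pass over the items rather than a rewrite. The items below paraphrase the checklist above and are illustrative only.

```python
# Illustrative annual-review checklist; adapt the items to your systems.
ANNUAL_REVIEW = (
    "Accuracy metrics reviewed and still within documented range?",
    "Fairness metrics re-run and results recorded?",
    "Incident log reviewed for new failure modes?",
    "Open findings from the last review closed or re-dated?",
    "Revision history row added for this review?",
)

def outstanding_items(answers: dict[str, bool]) -> list[str]:
    """Return checklist items not yet confirmed for this review cycle."""
    return [item for item in ANNUAL_REVIEW if not answers.get(item, False)]
```

When `outstanding_items` comes back empty, the review is done and the revision-history row proves it happened.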
Assign ownership. One owner per AIA (or per system). That person is responsible for ensuring the AIA is updated on change and on schedule. Without a named owner, maintenance slips.
Integrate with governance. Link the AIA to your AI inventory, your risk classification, and your incident process. When something goes wrong, the AIA should be one of the places you look to see what was assessed and what might need to change. When you add a new high-risk system, an AIA (or a plan to complete one by a set date) should be part of the approval.
Algorithmic impact assessments are becoming mandatory for high-risk AI. They don't have to be monsters. Scope to defined systems, run a structured risk assessment, document in a consistent template with evidence and revision history, and maintain on change and on schedule. That's how you satisfy regulators without letting the process consume your quarter.
Need help scoping or conducting an AIA for a high-risk system? We do independent AI risk assessments and impact assessment support. Reach out.