How much of your NIST AI Risk Management Framework work carries over when the EU AI Act lands on your roadmap? A lot, but not everything. The frameworks were built in different contexts (voluntary U.S. guidance vs. binding EU law), and they use different language. Under the hood, the same ideas show up in both. Mapping them lets you reuse governance, risk processes, and evidence instead of building parallel programs.
Here's how the two line up and where the gaps are.
Where NIST and the EU Act Actually Overlap
NIST organizes risk management around four functions: GOVERN, MAP, MEASURE, and MANAGE. The EU AI Act doesn't name those functions, but its obligations for high-risk AI systems map onto the same lifecycle. Studies and crosswalks (including analyses from Trustible, GLACIS, and similar practitioners) consistently put the overlap in the 60–70% range. Your NIST work is the foundation; you still have to add EU-specific structure and proof. NIST compliance does not equal EU compliance.
GOVERN in NIST is about culture, accountability, and processes that run through the whole AI lifecycle. In the EU Act, that shows up as governance and quality management: who is responsible, how decisions are made, and how the organization ensures ongoing compliance. Articles 16 (obligations for providers) and 17 (quality management system) are the main hooks. If you've defined roles, policies, and review cadences under NIST GOVERN, you're building the same kind of structure the Act expects. The difference is that the Act requires it to be demonstrable and auditable. Document it in a way a conformity assessor or regulator can follow.
MAP is where you identify and contextualize AI systems and their risks. The EU Act doesn't use the word "map," but it does require that you know what you have, how it's used, and what could go wrong. Your NIST-style inventory and risk categorization (what systems exist, what data they use, what harms they could cause) directly support the Act's risk-based regime. The Act then adds its own taxonomy: prohibited, high-risk, limited, minimal. You'll need to slot your systems into those tiers, but the underlying work of "what do we have and where are the risks?" is shared.
MEASURE in NIST is about choosing metrics and methods to assess and quantify risk. In the EU Act, that appears as technical documentation, testing, and conformity assessment. You need to show that you've measured accuracy, robustness, bias, and security in ways that match the Act's requirements. If you've already defined metrics and run evaluations under NIST MEASURE, that's the same kind of evidence. You may need to reshape it into the format and level of detail the Act and its annexes expect, but you're not inventing the practice from scratch.
MANAGE is where you respond to risks and keep monitoring. The Act requires ongoing risk management, human oversight, and post-market monitoring. Your NIST MANAGE activities (prioritizing mitigations, updating risk assessments, tracking performance over time) align with Articles 9 (risk management system) and 14 (human oversight). Again, the EU text is more prescriptive about what "ongoing" and "documented" mean, but the behavior is the same: continuous risk management, not a one-time launch checklist.
One program, two lenses. Your NIST implementation gives you the processes and evidence; the EU Act asks you to present them in a specific legal and conformity-assessment frame.
What Does the EU Act Add That NIST Doesn't?
NIST is deliberately non-prescriptive. The EU AI Act is not. Even with strong overlap, you'll have to add a few things that don't exist in the Framework.
Conformity assessment and CE marking: the Act requires that high-risk AI systems go through a defined conformity assessment (Annex VI internal control or Annex VII with a notified body) and, when applicable, CE marking. NIST has no equivalent. You need to design your documentation and quality management so they can feed into the chosen conformity route and, if relevant, support the technical file and declaration of conformity. That's net-new process and artifacts.
Incident reporting: the Act sets explicit obligations and timelines for reporting serious incidents and malfunctioning. NIST talks about monitoring and response; it doesn't specify regulatory reporting windows or formats. You'll need incident procedures that satisfy the Act's requirements and plug into your existing incident and risk processes.
Risk tiers and obligations: NIST doesn't classify systems into prohibited, high-risk, limited, or minimal. You have to run your inventory through the Act's annexes and determine which tier each system falls into, then apply the right set of obligations. Your NIST risk categorization informs that, but the final classification is an EU-specific step.
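As a rough sketch of that classification step, here's a minimal Python example. The field names and tier rules here are illustrative assumptions for the sketch; the real test is the Act's Article 5 prohibitions and Annex III categories, not these booleans.

```python
from dataclasses import dataclass

# Illustrative inventory record; these fields are assumptions for this sketch.
@dataclass
class AISystem:
    name: str
    use_case: str
    prohibited_practice: bool    # e.g. social scoring (Art. 5)
    annex_iii_area: bool         # falls under an Annex III high-risk area
    interacts_with_people: bool  # triggers transparency duties

def eu_tier(system: AISystem) -> str:
    """Very rough tier assignment; the authoritative rules are in the Act."""
    if system.prohibited_practice:
        return "prohibited"
    if system.annex_iii_area:
        return "high-risk"
    if system.interacts_with_people:
        return "limited"
    return "minimal"

chatbot = AISystem("support-bot", "customer service", False, False, True)
print(eu_tier(chatbot))  # limited
```

The point is that the tier is *derived* from inventory facts you already collect under NIST MAP, not a separate data set.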
Penalties and enforcement: NIST has no enforcement mechanism; the Act does. The most serious breaches carry fines of up to €35 million or 7% of global annual turnover, whichever is higher, with lower but still significant amounts for other violations. That doesn't change what you build, but it raises the stakes and the need for defensible documentation.
Explicit accountability and transparency: the Act names "providers," "deployers," and sometimes "importers" and "distributors," with different duties. You need to know which role your organization plays for each system and document accordingly. NIST's GOVERN function supports that, but the Act's role-based obligations are more granular and legally binding.
Treat these as extensions to your NIST-based program, not a second program. Same governance, same risk lifecycle; add conformity, reporting, classification, and role-based documentation on top.
How to Use the Map in Practice
Treat NIST as your internal operating model and the EU AI Act as one of the external views you have to satisfy.
Single inventory, multiple views. Keep one inventory of AI systems (use case, data, context, intended users). From that, derive both your NIST risk view and your EU risk tier. When something changes, update the inventory once and then refresh both views. Avoid maintaining two separate "lists of what we have."
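A minimal sketch of the one-inventory, two-views idea, in Python. The record schema and the mapping functions are hypothetical; the point is that both views are derived from one canonical record, never stored separately.

```python
# One canonical record per AI system; field names are illustrative.
inventory = [
    {"name": "resume-screener", "data": "applicant CVs",
     "harms": ["bias"], "annex_iii": True},
    {"name": "support-bot", "data": "chat logs",
     "harms": ["misinformation"], "annex_iii": False},
]

def nist_view(rec):
    # NIST MAP-style view: what it is, what it touches, what can go wrong.
    return {"system": rec["name"], "data": rec["data"], "harms": rec["harms"]}

def eu_view(rec):
    # EU view: toy tier rule for the sketch; the real rule is the Act's annexes.
    return {"system": rec["name"],
            "tier": "high-risk" if rec["annex_iii"] else "minimal-or-limited"}

nist = [nist_view(r) for r in inventory]
eu = [eu_view(r) for r in inventory]
```

Update `inventory` once when a system changes, and both views refresh from it; you never reconcile two diverging lists.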
Evidence library, multiple consumers. Build an evidence library: risk assessments, test results, model cards, procedure docs, review records. Structure it so the same artifacts can support both NIST (e.g., Playbook actions) and EU (technical documentation, QMS records, conformity evidence). Tag or index by which framework(s) each artifact supports. When an auditor or assessor asks for something, you pull from the same library and format as needed.
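The tagging idea can be sketched as a simple index, assuming hypothetical artifact names and framework references (the NIST Playbook IDs and EU article numbers shown are examples, not a complete mapping):

```python
# Illustrative evidence index: one artifact, tagged for every framework it serves.
evidence = [
    {"artifact": "bias_eval_2024Q4.pdf",
     "frameworks": {"NIST": ["MEASURE 2.11"], "EU": ["Art. 10", "Annex IV"]}},
    {"artifact": "qms_policy_v3.md",
     "frameworks": {"NIST": ["GOVERN 1.2"], "EU": ["Art. 17"]}},
]

def pull(framework: str, ref: str):
    """Return all artifacts supporting a given framework requirement."""
    return [e["artifact"] for e in evidence
            if ref in e["frameworks"].get(framework, [])]

print(pull("EU", "Art. 17"))  # ['qms_policy_v3.md']
```

When an assessor asks for QMS evidence, you query the index rather than hunting through a second, EU-only document store.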
One governance spine. Use one set of policies and roles for AI risk and governance. Document how that spine satisfies NIST GOVERN and how it satisfies the Act's governance and QMS requirements. You might have a NIST-oriented playbook and an EU-oriented conformity checklist, but they should reference the same underlying processes and owners.
Gap from the Act backward. Do a gap assessment from the EU AI Act: list the obligations that apply to your high-risk (and other) systems, then check which are already covered by your NIST implementation and which are not. The gaps are your EU-specific work: conformity route, incident reporting, tier classification, role mapping, and any extra documentation or process the Act requires. You're not redoing NIST work; you're filling in what's missing.
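The backward gap assessment reduces to a set difference: obligations the Act imposes, minus obligations your NIST program already covers. A sketch, with an illustrative (not exhaustive) obligation list and coverage map:

```python
# Obligations that apply to your systems under the Act (illustrative subset).
eu_obligations = {
    "Art. 9 risk management system",
    "Art. 14 human oversight",
    "Art. 17 quality management system",
    "Art. 43 conformity assessment",
    "Art. 73 serious-incident reporting",
}

# Obligations your existing NIST implementation already addresses
# (assumed coverage for this sketch; verify against your own program).
covered_by_nist = {
    "Art. 9 risk management system",      # MANAGE activities
    "Art. 14 human oversight",            # GOVERN/MANAGE oversight processes
    "Art. 17 quality management system",  # GOVERN policies and roles
}

gaps = sorted(eu_obligations - covered_by_nist)
for g in gaps:
    print("EU-specific work:", g)
```

What's left in `gaps` is exactly the net-new work described above: conformity route, incident reporting, and whatever else your coverage check surfaces.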
The One Trap to Avoid
Assuming that because the frameworks overlap, "we did NIST" is enough for the EU. It isn't. NIST gives you the right habits and a lot of reusable content; the Act adds mandatory structure, deadlines, and consequences. The other trap is assuming they're unrelated and building two separate stacks. They're not. Map them once, maintain one core program, and extend it for the Act. You'll do the work once and present it twice: in NIST terms for internal and U.S.-facing use, and in EU terms for conformity and enforcement.
We run AI risk assessments and compliance mapping for teams bridging NIST and the EU AI Act. Get in touch.