Tags: NIST AI RMF · MITRE ATLAS · ISO 42001 · AI Governance · Threat Model

NIST AI RMF, MITRE ATLAS, and ISO 42001: Choosing the Right AI Security Framework for Your Threat Model


The real question isn’t which framework to adopt; it’s which one does the job your threat model actually demands, and how to plug it into what you already run. NIST’s AI Risk Management Framework, MITRE ATLAS, and ISO/IEC 42001 are the three most frequently cited AI security frameworks, and they answer different questions. Use the wrong one as your primary lens and you either over-document without closing adversarial gaps, or you harden tactics without a governance story that auditors and regulators can follow.

This post covers how they differ, where they line up, where they fall short, and how to wire them into your existing program instead of starting over.


What Each Framework Is Actually For

NIST AI RMF is a governance and risk-management scaffold. Its four functions—Govern, Map, Measure, Manage—are about how your organization thinks about, assigns ownership of, and tracks AI risk. It’s voluntary, sector-agnostic, and deliberately light on prescribed controls so it can sit on top of whatever you already do. NIST has published crosswalks to ISO 42001, SOC 2, NIST CSF, and others, which tells you the intent: this is the structure that other frameworks and controls hang on. The 2024 Generative AI profile extends that structure into gen AI–specific risks. The RMF does not, by itself, tell you which threats to prioritize or how to defend against a specific attack technique. It tells you how to run the process.

MITRE ATLAS is a threat and adversary model. It names tactics, techniques, and sub-techniques that attackers use against AI/ML systems—data poisoning, model theft, adversarial examples, prompt injection–style behaviors—and ties them to phases of the ML lifecycle (data collection, model development, deployment, monitoring). It’s the same kind of artifact as the MITRE ATT&CK matrix, but for AI systems. So its job is: “Given our AI assets and data flows, what are the concrete TTPs we should design controls and tests for?” Roughly 70% of ATLAS’s documented mitigations map to controls you likely already have (access control, integrity checks, monitoring). That makes it practical for security teams: you’re not inventing a parallel universe of controls, you’re aligning existing ones to an AI threat model.

ISO/IEC 42001 is the first international management-system standard for AI. It specifies requirements for an AI Management System (AIMS): context of the organization, leadership, planning, support, operation, performance evaluation, improvement. You can get certified by an accredited body; certificates typically run three years with surveillance audits. So its job is: “Demonstrate to customers, partners, or regulators that we have a repeatable, auditable system for managing AI.” It’s about having the system and proving it—not about the technical details of a given model or the exhaustiveness of your adversarial coverage. It’s process- and governance-oriented, not a threat taxonomy.

In one sentence: NIST AI RMF structures how you govern and manage AI risk; MITRE ATLAS tells you what adversaries do and how to mitigate it; ISO 42001 gives you a certifiable management system. They’re complementary. The mistake is treating them as substitutes.


Where They Overlap (and Where That Helps)

Governance and risk show up in all three, in different forms.

NIST’s Govern and Map functions overlap with ISO 42001’s requirements for context, leadership, risk assessment, and risk treatment. NIST has published an official crosswalk between the AI RMF and ISO 42001. If you implement the RMF’s governance and mapping well, a lot of the work for 42001 is already done—roles, policies, risk process, and lifecycle thinking. Many organizations use the RMF as the design blueprint and 42001 as the certification target.

ATLAS sits in the “operational controls” layer. Its mitigations—access control, logging, validation, guardrails—are the kind of things that satisfy “we have controls over the AI lifecycle” in both NIST (Manage) and ISO 42001 (operation and performance evaluation). So your threat model and red-team work (ATLAS) can feed the evidence that your governance (RMF) and management system (42001) are actually addressing real attack paths. The overlap: one set of controls, multiple frameworks. You don’t need three separate control sets; you need one coherent set that you describe in RMF terms, map to ATLAS mitigations, and document for 42001.
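A minimal sketch of the "one control set, multiple frameworks" idea: describe each control once, then annotate it with the RMF function it supports, the ATLAS mitigations it implements, and the ISO 42001 clause its evidence feeds. The control names, ATLAS mitigation IDs, and clause labels below are illustrative placeholders, not taken verbatim from the published matrices.

```python
from dataclasses import dataclass, field

@dataclass
class Control:
    """One control, described once, referenced by all three frameworks."""
    name: str
    rmf_function: str  # NIST AI RMF function the control supports
    # ATLAS mitigation IDs this control implements (placeholder IDs)
    atlas_mitigations: list[str] = field(default_factory=list)
    iso42001_clause: str = ""  # AIMS clause the audit evidence feeds

controls = [
    Control(
        name="Model registry with signed artifacts",
        rmf_function="Manage",
        atlas_mitigations=["AML.M00XX"],  # hypothetical: model integrity checking
        iso42001_clause="Operation",
    ),
    Control(
        name="Inference logging and anomaly alerts",
        rmf_function="Measure",
        atlas_mitigations=["AML.M00YY"],  # hypothetical: behavior monitoring
        iso42001_clause="Performance evaluation",
    ),
]

# One artifact, three views: filter by whichever framework the audience speaks.
by_rmf = {c.name: c.rmf_function for c in controls}
```

The point of the structure is that an auditor, a red team, and a risk committee all read the same record through different fields, so the control inventory never forks into three diverging documents.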


Where the Gaps Are

None of the three is sufficient alone.

NIST AI RMF has been criticized for under-specifying model governance and the realities of constantly retrained or “disposable” models—spam filters, recommenders, fraud models—where data drift and retraining are the norm. It also promotes goals like explainability and robustness without much guidance on the tradeoffs: explainability can leak information useful to attackers, and some robustness measures can interact poorly with security. So the RMF gives you the process to discuss those tradeoffs; it doesn’t resolve them for you.

MITRE ATLAS tells you what to defend against and how in technique terms. It doesn’t tell you how to govern, how to set risk appetite, or how to satisfy a regulator or auditor that you have a system. It’s a threat and control lens, not a governance or certification framework. For pure adversarial coverage, you’d still pair it with something like OWASP’s Top 10 for LLM Applications for application-level vulnerabilities; ATLAS is stronger on ML pipeline and model-centric threats.

ISO 42001 certifies that you have a management system in place. It does not certify that your AI is safe, fair, or resilient to specific attacks. It doesn’t mandate technical measures for adversarial robustness or real-time evidence. Regulators (e.g., under the EU AI Act) are moving toward continuous, evidence-based compliance; 42001’s periodic audits don’t by themselves guarantee that. Shadow AI—models or APIs brought in without going through the AIMS—is a real risk that 42001’s structure can reduce but not eliminate. So 42001 is a necessary piece of “we take AI seriously” proof; it’s not a substitute for threat modeling and technical controls.

The gap that matters most in practice: adversarial coverage (ATLAS) + governance and risk process (NIST) + auditable, certifiable system (42001) are three different needs. You need at least two of the three, and often all three, depending on your threat model and compliance demands.


Mapping to Your Existing Program (Without Starting From Zero)

Start from your threat model and your compliance needs, then plug in the frameworks.

If your main concern is “we have no structured way to govern or talk about AI risk” — start with NIST AI RMF. Use Govern and Map to assign ownership, define context, and identify risks. Your existing risk and governance (SOC 2, NIST CSF, internal policy) already give you a base; NIST’s crosswalks show exactly where the RMF attaches. You’re extending the same governance language to AI, not building a second program.

If your main concern is “we don’t know which attacks to design for” — use MITRE ATLAS. Run threat modeling along the ML lifecycle (data, training, deployment, monitoring), pick the tactics and techniques that apply to your systems, and map ATLAS mitigations to your current controls. Many of those mitigations will already exist (identity, logging, change control); the lift is mapping and, where needed, adding AI-specific controls (e.g., input validation, model integrity). ATLAS Navigator and other MITRE tools can support threat modeling and red-team planning.
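The "lift is mapping" step reduces to a gap analysis: for each applicable ATLAS technique, check whether an existing control is already mapped behind it, and flag the ones with nothing. A minimal sketch, with hypothetical technique names and controls for an imagined LLM deployment:

```python
# Techniques judged applicable to this system, keyed by the lifecycle
# surface they attack (names illustrative, not verbatim ATLAS entries).
applicable = {
    "data-poisoning": "training data ingestion",
    "model-theft": "inference API",
    "prompt-injection": "user-facing LLM input",
}

# Existing controls already mapped to each technique.
mapped_controls = {
    "data-poisoning": ["dataset checksums", "source allowlist"],
    "model-theft": ["API auth", "rate limiting"],
}

# The real lift is the delta: techniques with no existing control behind them.
gaps = [t for t in applicable if not mapped_controls.get(t)]
# gaps == ["prompt-injection"] → this is where an AI-specific control is needed
```

Run over a real inventory, this is also the artifact that feeds red-team planning: the gap list is the test plan.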

If your main concern is “we need a certificate or a clear story for customers/regulators” — aim for ISO 42001, but build the content using NIST AI RMF and ATLAS. Use the RMF to design governance and risk process, use ATLAS to justify and document operational controls, and use 42001 as the audit lens. That way the certificate reflects both process and threat-informed control selection.

If you already have SOC 2 or NIST CSF: Your governance, risk assessment, and monitoring patterns already align with the RMF. Add an AI system inventory, AI-specific risk categories, and Map/Measure/Manage activities for those systems. Layer ATLAS where you need adversarial coverage. If you later pursue 42001, the RMF–42001 crosswalk and your existing documentation will shorten the path.

A practical order of operations: (1) Define the scope of AI systems and ownership (RMF Govern/Map). (2) Threat-model those systems with ATLAS and tie mitigations to existing controls. (3) Document the resulting process and controls in a way that satisfies RMF Manage and, if needed, 42001. You’re not choosing one framework; you’re using each for the job it’s good at and connecting them through a single, coherent set of controls and artifacts.
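The three steps above can be sketched as one inventory record per AI system, where each step fills a different field and a record is only "audit-ready" when all three are populated. Field names and sample values are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One inventory entry carrying artifacts from all three steps."""
    system: str
    owner: str  # step 1: scope and ownership (RMF Govern/Map)
    # step 2: ATLAS techniques judged applicable to this system
    atlas_techniques: list[str] = field(default_factory=list)
    # step 3: documented evidence (RMF Manage / ISO 42001 audit trail)
    evidence: list[str] = field(default_factory=list)

record = AISystemRecord(
    system="fraud-scoring-model",
    owner="risk-engineering",
    atlas_techniques=["data poisoning", "model evasion"],
    evidence=["retraining change log", "red-team report 2024-Q3"],
)

# Audit-ready only when every step has produced something concrete.
audit_ready = bool(record.owner and record.atlas_techniques and record.evidence)
```

An empty field tells you exactly which step was skipped for that system, which is also where shadow AI shows up: a model with no record at all.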


The One Thing to Get Right

The one thing to get right is clarity on what you’re defending against and what you’re proving. If you only adopt NIST AI RMF, you risk a great process that doesn’t explicitly cover adversarial TTPs. If you only adopt ATLAS, you risk strong tactical coverage with no governance story. If you only adopt ISO 42001, you risk a certificate that doesn’t guarantee resilience to real attacks or real-time regulatory expectations. Match the framework to the question: governance and risk structure (RMF), adversary and technique coverage (ATLAS), or auditable management system (42001). Then wire them together so your threat model, your controls, and your proof are the same story—not three separate ones.
