AI Governance · Inventory · Risk Management

You Can't Govern What You Can't See: How to Build a Living AI Inventory From Scratch


What AI do you actually have? Not what you think you have. What's running in production, what's embedded in vendor tools, what's been spun up in a department you've never heard from. Most organizations answer with a spreadsheet from six months ago and a vague sense that it's incomplete. That's guesswork with a header row, not governance.

A living AI inventory is the foundation everything else sits on. Risk classification, compliance evidence, incident response, vendor due diligence: none of it works if you're still discovering systems during an audit. Here's how to build one that stays alive.

Why Spreadsheets Die

The classic approach is to send a survey. "List all AI systems your team uses." You get back a mix of the obvious (the chatbot everyone knows about), the aspirational (something they're piloting), and the forgotten (that Excel macro that calls an API). A few months later, someone deploys a new model. Someone else signs a contract for a vendor tool that uses AI. Nobody updates the sheet. By the time compliance or legal asks for the "current" inventory, it's a historical document.

The problem isn't laziness. An inventory maintained by human recall and manual updates will always lag reality. AI is proliferating in places that don't report to a central AI team: HR tools, marketing automation, customer support platforms, internal productivity apps. Shadow IT was a problem for cloud. Shadow AI is the same pattern, with higher stakes because the systems are making or influencing decisions.

You need a process that keeps the inventory close enough to reality that you can govern from it. A perfect snapshot is the wrong target.

What Belongs in the Inventory

There's no single standard, but every useful inventory answers a few questions for each system. Who owns it? What does it do? Where does it run? What data does it touch? How is it updated? And what risk regime does it fall under?

Ownership and discovery: for each system you need at least one accountable owner (a team or person) and a record of how it was discovered. Was it declared? Found in a procurement review? Detected in a scan? That last one matters. If everything in your inventory came from self-reporting, you're missing things.

Function and context: a one-line description plus deployment context (internal vs customer-facing, human-in-the-loop vs autonomous, decision-support vs full automation). This drives both risk tier and who cares about it. A model that suggests which support ticket to open next is different from one that denies a loan.

Data and integration: what inputs does the system use (and from where)? What does it output, and who or what consumes it? Data lineage and integration points are where a lot of risk hides. A harmless-looking summarization tool that has access to PII is suddenly a compliance and breach-risk story.

Update and provenance: is this a fixed model or does it retrain? If it retrains, on what data and how often? If it's a vendor system, what do you know about their model and updates? You don't need to reverse-engineer every black box, but "we don't know" should be a documented answer that triggers a follow-up.

Risk classification: each system should have a risk tier aligned with your framework (EU AI Act categories, NIST AI RMF, or internal taxonomy). The classification determines what level of assessment and monitoring you need. This can be provisional at first. The point is to make the classification explicit and revisable.
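The fields above can be sketched as a minimal record structure. This is an illustrative schema, not a standard; the field names, the `RiskTier` values, and the example system are all assumptions to map onto your own framework.

```python
from dataclasses import dataclass, field
from enum import Enum

class RiskTier(Enum):
    """Provisional tiers; map these to your framework (EU AI Act, NIST AI RMF)."""
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"
    UNCLASSIFIED = "unclassified"  # explicit "we don't know yet"

@dataclass
class AISystemRecord:
    """One inventory row. Field names are illustrative, not a standard."""
    name: str
    owner: str                      # accountable team or person
    discovered_via: str             # "declared" | "procurement" | "scan"
    description: str                # one-line function summary
    deployment: str                 # e.g. "internal, human-in-the-loop"
    data_inputs: list = field(default_factory=list)   # sources, incl. PII notes
    data_outputs: list = field(default_factory=list)  # who or what consumes it
    update_model: str = "unknown"   # "fixed" | "retrains" | "vendor-managed"
    risk_tier: RiskTier = RiskTier.UNCLASSIFIED
    verified: bool = False          # unverified entries are next cycle's agenda

# Hypothetical example entry
record = AISystemRecord(
    name="support-ticket-router",
    owner="support-eng",
    discovered_via="declared",
    description="Suggests routing for inbound support tickets",
    deployment="internal, decision-support",
    data_inputs=["ticket text (may contain PII)"],
    risk_tier=RiskTier.LIMITED,
)
```

Note the defaults: a new record starts unclassified and unverified, which makes the gaps explicit rather than hiding them behind blank cells.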

None of this has to be perfect on day one. A living inventory improves over time. The mistake is demanding completeness before you start. Start with what you can confirm, mark the rest as unverified, and treat the gaps as the agenda for the next cycle.

Making It Living: Triggers and Loops

An inventory stays current when updates are triggered by real events, not by someone remembering to open a spreadsheet.

Procurement and vendor intake: any new contract or renewal that might involve AI (SaaS, APIs, embedded features) should route through a lightweight intake (product name, vendor, AI capability description, data access, risk owner). That entry becomes an inventory item. If your procurement process doesn't have a step for this, add one. It's the highest-leverage place to catch new systems.
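The intake step can be as simple as a required-fields check before a contract entry becomes an inventory item. A minimal sketch, assuming the five fields named above (the field names and the sample form are hypothetical):

```python
REQUIRED_INTAKE_FIELDS = {
    "product_name", "vendor", "ai_capability", "data_access", "risk_owner",
}

def validate_intake(form: dict) -> list:
    """Return the intake fields still missing or blank; an empty list means
    the entry is complete enough to become an inventory item."""
    provided = {k for k, v in form.items() if v}
    return sorted(REQUIRED_INTAKE_FIELDS - provided)

missing = validate_intake({
    "product_name": "HelpBot Pro",
    "vendor": "Acme SaaS",
    "ai_capability": "LLM-based reply drafting",
    "data_access": "customer support transcripts",
    "risk_owner": "",  # blank values count as missing
})
# missing == ["risk_owner"]
```

The point of the check is the blocked state: a contract with no named risk owner doesn't silently enter the inventory unowned.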

Engineering and release pipelines: for in-house AI, tie inventory updates to the release process. New model or major model change? Inventory record created or updated before or at deploy. That can be a checklist in your existing change management or a required field in a deployment ticket. Make "update the inventory" part of "ship the thing," not a separate chore.
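One way to wire this in is a deploy-time gate that fails if the system being shipped has no inventory record. A sketch under stated assumptions: the inventory is a JSON list of records, and `inventory_gate`, the record fields, and the example system names are all hypothetical.

```python
import json

def inventory_gate(inventory_json: str, system_name: str) -> bool:
    """Deploy-time check: pass only if the system already has an inventory
    record with an owner and a (possibly provisional) risk tier."""
    inventory = json.loads(inventory_json)
    entry = next((s for s in inventory if s.get("name") == system_name), None)
    return bool(entry and entry.get("owner") and entry.get("risk_tier"))

INVENTORY = json.dumps([
    {"name": "churn-model-v2", "owner": "data-science", "risk_tier": "limited"},
])

ok = inventory_gate(INVENTORY, "churn-model-v2")       # True: record exists
blocked = inventory_gate(INVENTORY, "churn-model-v3")  # False: no record yet
```

In practice this would run as a CI step or a required field in the deployment ticket; the mechanism matters less than the rule that a missing record blocks the release.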

Periodic discovery: self-report and procurement won't catch everything. Run a periodic discovery pass: scan for API calls to known AI providers, review SaaS app lists for tools that advertise AI features, ask business units to confirm or correct what you have on file. Quarterly is a reasonable cadence for most organizations. The first pass will surface shadow AI. Later passes keep it from piling up again.
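The scan half of a discovery pass can start as simply as matching egress or proxy logs against known AI provider endpoints. A minimal sketch, assuming you have log lines as text; the host list is a small seed to extend for your environment, and the sample log is fabricated for illustration.

```python
import re

# Hostnames of well-known AI API providers; extend for your environment.
AI_PROVIDER_HOSTS = (
    "api.openai.com",
    "api.anthropic.com",
    "generativelanguage.googleapis.com",
)

HOST_RE = re.compile("|".join(re.escape(h) for h in AI_PROVIDER_HOSTS))

def scan_log_lines(lines):
    """Yield (line_no, host) for every log line mentioning a known AI
    provider endpoint. Matches are discovery leads, not verdicts."""
    for i, line in enumerate(lines, start=1):
        m = HOST_RE.search(line)
        if m:
            yield i, m.group(0)

log = [
    "GET https://api.internal.example/v1/users 200",
    "POST https://api.openai.com/v1/chat/completions 200",
    "POST https://api.anthropic.com/v1/messages 200",
]
hits = list(scan_log_lines(log))
# hits == [(2, "api.openai.com"), (3, "api.anthropic.com")]
```

Each hit is a lead to trace back to a team and either match to an existing inventory entry or open a new unverified one.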

Regulatory and policy triggers: when a new regulation or internal policy lands, use it as a reason to refresh. "We need to classify everything for the AI Act" is a good moment to reconcile the inventory with reality and fill gaps. Treat compliance deadlines as inventory hygiene deadlines.

None of this requires fancy tooling at the start. A structured doc or lightweight database plus clear ownership and a recurring calendar invite can get you 80% of the way. The rest is discipline.

The Shadow AI Problem

Uncomfortable truth: if your inventory only contains what people volunteered, you have more AI than you think. Marketing is using generative tools for copy. Support is using a vendor chatbot. Finance may have a spreadsheet or workflow that hits an external API. None of it may appear in your inventory.

Build discovery into the process. That can mean surveys that ask "what tools does your team use that have AI or automation?" and "what would you use if you needed to generate or classify something quickly?" It can mean working with IT or procurement to run reports on licensed software and flag known AI-enabled products. It can mean scanning outbound traffic for calls to OpenAI, Anthropic, Google AI, and other providers (with appropriate privacy and security guardrails). You won't find everything. You'll find more than you have now.
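The licensed-software angle can be sketched as a simple cross-reference: take the app list from IT or procurement and flag anything on a curated list of known AI-enabled products. The product names here are hypothetical; seed the list from your own vendor research.

```python
# Hypothetical product names; seed this set from your own vendor research.
KNOWN_AI_PRODUCTS = {"helpbot pro", "copydraft", "autoscreen hr"}

def flag_ai_tools(licensed_apps):
    """Cross-reference a licensed-software report against known AI-enabled
    products. Flags are leads for inventory follow-up, not conclusions."""
    return [
        app for app in licensed_apps
        if app.strip().lower() in KNOWN_AI_PRODUCTS
    ]

flags = flag_ai_tools(["Slack", "HelpBot Pro", "Figma", "CopyDraft"])
# flags == ["HelpBot Pro", "CopyDraft"]
```

A curated list like this will always lag the market, which is why it complements rather than replaces the survey and traffic-scan approaches.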

When you do find something new, don't default to blame. Add it to the inventory, assign an owner, classify it, and decide what level of review it needs. Visibility, not prohibition. Once it's visible, you can govern it.

Tradeoffs and Nuance

A living inventory has costs. Someone has to own the process. Business units have to engage. You'll discover systems that need assessment and maybe remediation. That's the point. The cost of not having an inventory is larger: surprise findings in an audit, incidents involving systems nobody knew about, and governance that's built on a fiction.

One nuance: not everything that uses an API from an AI provider is the same risk. A tool that uses GPT to suggest email subject lines is not the same as one that uses a model to screen job applicants. Your inventory should support that distinction. Capture enough context (function, data, deployment) so you can triage. Don't try to treat every row the same.

Another: vendor systems are hard. You often don't know the model, the training data, or the update frequency. Document what you do know, what you've been told, and what you've asked for. "Vendor states they use a fine-tuned model; we have not verified" is a valid inventory entry. It's also a flag for due diligence or contract language in the next cycle.

Where This Gets You

A living AI inventory doesn't by itself make you compliant or secure. It tells you what you're governing. From there you can prioritize risk assessments, assign ownership, trigger reviews when systems change, and give auditors and regulators something that resembles the truth.

Start with the systems you already know. Add structure (ownership, function, data, risk tier). Plug in procurement and release triggers so new systems get added. Run a discovery pass and accept that the first one will be messy. Then repeat. The inventory that matters is the one that's still being updated in six months. Build for that.


We run independent AI risk assessments and inventory-driven review programs. Contact us to build your AI governance from the ground up.
