AI Access, Identity Governance, IAM, Non-Human Identities

71% of Organizations Say AI Tools Access Core Systems Like Salesforce and SAP: But Only 16% Govern That Access


71% of organizations say AI tools access core systems like Salesforce and SAP. Only 16% govern that access. The 2026 CISO AI Risk Report put a number on something many security teams already suspected. Most organizations have AI agents or integrations touching CRM, ERP, and other systems that hold customer data, financials, and business-critical records. Most of them are not applying the same identity and access discipline they use for human users. That gap is a risk. An AI agent with broad, standing access to Salesforce can exfiltrate data, corrupt records, or propagate bad decisions at scale if it's compromised or misused. The fix isn't to block AI from core systems. It's to extend identity governance to non-human identities: just-in-time privilege, scoped API permissions, read-only defaults, and continuous monitoring for privilege drift. Same principles as human IAM. Adapted for the way AI agents work.

The Gap Isn't Just Awareness

When 71% say AI accesses core systems and 16% govern it, the gap isn't that the other 55% don't know. Many know. They just haven't applied governance. AI integrations were stood up for productivity or automation. Someone created an API key or a service account. The AI agent got the access it needed to do the job. Nobody asked what the minimum necessary access was, whether it should be read-only, or how often that access should be reviewed. You end up with AI agents with standing access to read and write Salesforce objects, SAP modules, or other systems. If that access is broad (e.g., full object access, all fields), the blast radius of a prompt injection, a compromised integration, or a bug is large. Identity governance for humans has learned this lesson: least privilege, time-bound access, and regular attestation. AI identities need the same treatment. They're not users in the traditional sense, but they're principals that authenticate and act on data. Treat them as first-class identities in your IAM and identity governance model.

Just-in-Time Privilege for AI Agents

Humans get just-in-time (JIT) access when you don't want them to have standing permission. They request access, get it for a limited window, and it expires. AI agents can follow the same pattern where the use case allows it. Instead of a long-lived API token or service account with broad rights, the agent gets short-lived credentials or scoped tokens that are issued when a task is authorized and revoked when the task completes or times out. Not every AI use case fits (e.g., a real-time assistant that needs to query CRM during a conversation may need a session-scoped token rather than per-request). But for batch jobs, scheduled syncs, or triggered workflows, JIT reduces the window where a compromised agent or leaked credential has access. Implement it by integrating the AI agent or the orchestration layer with your identity or secrets platform: issue a token with the minimum scope and the shortest practical TTL, and don't store long-lived credentials for the agent. Where JIT isn't feasible, the next best thing is scoped permissions and read-only by default.
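The JIT pattern can be sketched in a few lines. This is a minimal in-memory issuer standing in for a real secrets or identity platform; the class and method names are illustrative assumptions, not a specific vendor API.

```python
import secrets
import time

# Hypothetical token issuer standing in for a secrets platform that
# mints short-lived, scoped credentials. Names are illustrative.
class JITTokenIssuer:
    def __init__(self):
        self._tokens = {}  # token -> {agent, scope, expires_at}

    def issue(self, agent_id: str, scope: list[str], ttl_seconds: int = 300) -> str:
        """Issue a short-lived, scoped token when a task is authorized."""
        token = secrets.token_urlsafe(24)
        self._tokens[token] = {
            "agent": agent_id,
            "scope": set(scope),
            "expires_at": time.time() + ttl_seconds,
        }
        return token

    def authorize(self, token: str, requested_scope: str) -> bool:
        """Reject unknown or expired tokens and out-of-scope requests."""
        record = self._tokens.get(token)
        if record is None or time.time() >= record["expires_at"]:
            self._tokens.pop(token, None)  # expired: revoke eagerly
            return False
        return requested_scope in record["scope"]

    def revoke(self, token: str) -> None:
        """Revoke when the task completes, not just on timeout."""
        self._tokens.pop(token, None)

issuer = JITTokenIssuer()
tok = issuer.issue("case-summarizer", scope=["cases:read"], ttl_seconds=300)
assert issuer.authorize(tok, "cases:read")        # in scope, not expired
assert not issuer.authorize(tok, "cases:write")   # outside issued scope
issuer.revoke(tok)                                # task done: revoke now
assert not issuer.authorize(tok, "cases:read")
```

The point of the sketch is the shape: credentials exist only for the duration of an authorized task, the scope is fixed at issue time, and revocation happens on completion rather than waiting for the TTL.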

Scoped API Permissions

Most core systems (Salesforce, SAP, and others) support granular API or OAuth scopes. The AI agent doesn't need full admin. It needs the minimum scope to perform its function. If the agent summarizes support cases, it may need read access to cases and related objects, not write. If it updates a field when a workflow completes, it may need write to that field or object only, not to the entire object model. Define a permission set or role per AI use case: "Agent X can read Cases and Contacts, no write." "Agent Y can write to Opportunity Stage only, read on Opportunity and Account." Then assign that permission set to the service account or the OAuth client the AI uses. Avoid one shared "AI integration" account with broad rights. One identity per agent or per use case, with scoped permissions, makes it easier to audit and to contain a compromise. Document the mapping: which agent, which system, which scope, and why. That's your access governance for AI.
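The agent-to-scope mapping described above can live as a small registry that both provisioning and audit read from. A sketch, where the agent names, objects, and field scopes are the hypothetical examples from the text, not real Salesforce permission sets:

```python
# Illustrative per-agent scope registry: one identity per use case,
# with the documented scope and justification alongside it.
AGENT_SCOPES = {
    "agent-x-case-summarizer": {
        "system": "Salesforce",
        "read": {"Case", "Contact"},
        "write": set(),  # read-only use case
        "justification": "Summarizes support cases for the service desk",
    },
    "agent-y-stage-updater": {
        "system": "Salesforce",
        "read": {"Opportunity", "Account"},
        "write": {"Opportunity.StageName"},  # one field, not the object model
        "justification": "Advances Opportunity stage when a workflow completes",
    },
}

def is_allowed(agent: str, operation: str, target: str) -> bool:
    """Check a requested operation against the agent's documented scope."""
    entry = AGENT_SCOPES.get(agent)
    if entry is None:
        return False  # unknown identity: deny by default
    return target in entry.get(operation, set())

assert is_allowed("agent-x-case-summarizer", "read", "Case")
assert not is_allowed("agent-x-case-summarizer", "write", "Case")
assert is_allowed("agent-y-stage-updater", "write", "Opportunity.StageName")
```

Because each agent has its own entry with its own justification, a compromise of one identity is contained to one documented scope, and the registry doubles as the audit artifact.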

Read-Only by Default

Default new AI integrations to read-only unless there's a documented need for write. Many use cases are read-only: the agent queries CRM to answer a question, pulls data to summarize, or reads records to support a recommendation. Write access (create, update, delete) should require a justification and a defined scope. If you start from "read-only unless approved," you shrink the attack surface and the risk of accidental or malicious data modification. When write is required, scope it to the minimum objects and operations (e.g., update one field, create one record type). Review read-only access too. Read access to PII or financials is still sensitive. But the default of read-only forces an explicit decision for any write path and makes standing write access the exception, not the norm.
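A "read-only unless approved" default can be enforced at intake with a simple gate: read requests pass, write requests fail without a documented justification and an explicit scope. A minimal sketch, with assumed field names:

```python
# Sketch of an intake check for new AI integration access requests.
# Field names ("access", "justification", "write_scope") are illustrative.
def review_access_request(request: dict) -> tuple[bool, str]:
    """Approve read-only by default; make write access the documented
    exception rather than the norm."""
    access = request.get("access")
    if access == "read":
        return True, "approved: read-only default"
    if access == "write":
        if not request.get("justification"):
            return False, "rejected: write access requires a documented justification"
        if not request.get("write_scope"):
            return False, "rejected: write access requires an explicit object/field scope"
        return True, "approved: scoped write with justification on file"
    return False, "rejected: unknown access level"

assert review_access_request({"access": "read"})[0]
assert not review_access_request({"access": "write"})[0]
```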

Continuous Monitoring for Privilege Drift

Human IAM programs use access reviews and attestation to catch privilege creep. Users accumulate roles over time; reviews trim them back. AI identities drift too. A developer adds a scope to unblock a feature. A new use case gets the same service account as an old one and inherits more than it needs. Over time the agent has more access than anyone intended. Monitor. Track which identities (service accounts, OAuth clients, API keys) are used by AI agents and what permissions they have. Compare that to a defined baseline or to the documented use case. Alert when permissions are added or when an AI identity is granted access to a new system or object. Run periodic access reviews for AI identities the same way you do for high-privilege human accounts: "Does this agent still need this access? Is the scope still correct?" Revoke or narrow access that's no longer justified. Continuous monitoring doesn't require exotic tooling. It requires that AI identities are in your identity warehouse or IAM system, that you have a list of what they can do, and that you review and correct on a schedule (e.g., quarterly). Privilege drift for AI is the same problem as for humans. The fix is the same: visibility and regular review.
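The drift check itself is a set difference: compare what each AI identity has been granted against its documented baseline and flag anything extra, including identities with no baseline at all. A sketch, where the identity names and scope strings are illustrative and "granted" would in practice come from your IdP or the target system's API:

```python
# Minimal privilege-drift check for AI identities: anything granted
# beyond the documented baseline is reported for review.
def detect_privilege_drift(
    baseline: dict[str, set], granted: dict[str, set]
) -> dict[str, set]:
    """Return permissions each identity holds beyond its baseline.
    An identity missing from the baseline drifts in full."""
    drift = {}
    for identity, perms in granted.items():
        extra = perms - baseline.get(identity, set())
        if extra:
            drift[identity] = extra
    return drift

baseline = {"svc-agent-x": {"cases:read", "contacts:read"}}
granted = {
    # a developer added write to unblock a feature
    "svc-agent-x": {"cases:read", "contacts:read", "cases:write"},
    # an identity nobody documented
    "svc-agent-z": {"accounts:read"},
}
drift = detect_privilege_drift(baseline, granted)
assert drift == {"svc-agent-x": {"cases:write"},
                 "svc-agent-z": {"accounts:read"}}
```

Run on a schedule, the output is the agenda for the periodic access review: each flagged grant is either justified and added to the baseline, or revoked.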

Inventory and Document AI Access

You can't govern what you can't see. As part of your AI inventory, capture which AI systems or agents access which core systems and with what level of access. For each agent, record: system(s) accessed (Salesforce, SAP, etc.), identity used (service account, OAuth client, API key), permissions or scope (read/write, objects, fields if applicable), and business justification or use case. That inventory feeds your access reviews and your risk register. When the CISO or auditor asks "what AI has access to our core systems?" you're not guessing. You're pulling from the same place you track human access to those systems, with AI identities included. If your identity governance tool supports non-human or machine identities, add them. If not, maintain the mapping in your AI inventory and link it to your IAM data. The goal is one view of "who and what has access to what," with AI agents as first-class principals.
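The inventory record above maps naturally to a small schema. A sketch using a dataclass, where the field names are assumptions matching the attributes the text says to capture:

```python
from dataclasses import dataclass

# Illustrative inventory record for one AI identity; fields mirror what
# the text says to capture per agent.
@dataclass
class AIAccessRecord:
    agent: str            # AI system or agent name
    systems: list         # e.g., ["Salesforce", "SAP"]
    identity: str         # service account / OAuth client / API key ref
    permissions: list     # scopes: read/write, objects, fields
    justification: str    # business use case
    last_reviewed: str = "never"  # feeds periodic access reviews

inventory = [
    AIAccessRecord(
        agent="case-summarizer",
        systems=["Salesforce"],
        identity="oauth-client:case-summarizer",
        permissions=["Case:read", "Contact:read"],
        justification="Summarizes support cases",
    ),
]

def agents_with_access(system: str) -> list:
    """Answer 'what AI has access to this core system?' from the inventory."""
    return [r.agent for r in inventory if system in r.systems]

assert agents_with_access("Salesforce") == ["case-summarizer"]
assert agents_with_access("SAP") == []
```

Whether this lives in an identity governance tool or a maintained table linked to IAM data matters less than that the query above is answerable from one place.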

Same Principles, Adapted

Human IAM and identity governance rest on least privilege, time-bound access where possible, separation of duties, and continuous review. AI agents are different in how they authenticate (tokens, service accounts, API keys) and how they're provisioned (often by developers or integration owners). They're not different in principle. They should have the minimum access they need. That access should be scoped, documented, and reviewed. Default to read-only; justify write. Prefer JIT or short-lived credentials where the use case allows. Monitor for drift and correct. The 71% who have AI in core systems and the 16% who govern it don't need a completely new playbook. They need to extend the one they already have to non-human identities. Close the gap there, and the number starts to move.


Assessing AI access to core systems and building identity governance for AI agents? We run independent AI risk assessments and governance program design. Get in touch.
