Your most carefully vetted employee goes through background checks, role-based access reviews, and quarterly attestation. Your LLM agent gets whatever permissions the service account had when someone wired it up last quarter. That asymmetry is the access control gap: AI systems routinely operate with more permissions than any single human, and we've been slow to treat them as first-class principals in the authorization model.
The gap shows up in two ways. Either the agent inherits the calling user's context (so it can do anything that user can do, across every system that user can touch), or it runs as a shared service identity with broad, standing access. Both patterns violate least privilege. Both create blast radius when the agent is misused, compromised, or simply wrong. Locking AI out of core systems isn't the answer. Give AI its own identity and authorization story—fine-grained RBAC, sensitivity-aware data access, and policies that constrain what the agent can do at runtime, not just at design time.
The Inheritance Problem
When an AI agent acts "as" the user, it gets the user's effective permissions. That sounds reasonable until you consider scope. A human doesn't simultaneously have every document, mailbox, and CRM record open. An agent, in a single session, can be instructed to search, summarize, or act across all of it. The agent has a superset of what the user would normally use at once. Add prompt injection or a malicious third-party plugin, and that superset becomes the attack surface. The alternative—a dedicated service account for the AI—often goes the other way: one account, many use cases, broad rights so that nothing "breaks." That account becomes the highest-privilege principal in the system, with no role-based boundary and no per-conversation or per-task scope.
Recent incidents underscore the pattern. In the Moltbook breach, a Supabase production database had row-level security disabled; an API key in client-side code allowed access to 1.5 million tokens and private agent conversations. The failure wasn't only misconfiguration. Agents had implicit trust and could call any tool and access any resource without authorization checks. In the ServiceNow "Now Assist" vulnerability, a prebuilt agent had permission to "create data anywhere in ServiceNow" with no scoping or approval workflow. Attackers could use the agent to grant themselves persistent admin access. In both cases, the agent was treated as a trusted extension of the environment rather than a principal that needed its own least-privilege boundary.
Fine-Grained RBAC for AI: Beyond Coarse Roles
Traditional RBAC doesn't map cleanly onto agents. Roles are static; agent tasks are dynamic. A single "AI integration" role that can read and write across a data plane is exactly what we're trying to avoid. What's needed is RBAC that's both identity-aware and context-aware: which agent, which conversation or task, which resource, and under what conditions.
That means moving from one role per agent to policies that constrain specific actions and resources. For example: "This agent can read Cases and Contacts in Salesforce; it cannot write." "This agent can update Opportunity Stage only when the amount is under a threshold and the destination is internal." That's a blend of role-based boundaries (which systems and objects) and attribute-based rules (amount, destination, context). Research and vendor work has converged here. Frameworks like Progent use a domain-specific language to express privilege policies over tool calls and enforce them during execution without changing agent internals. Policy gateways (e.g., Cerbos, MCPermit, Aperture) sit in front of agent tool calls and evaluate each request against declarative policies—authenticating the agent, checking authorization, and auditing the result. The agent gets short-lived tokens with embedded claims (org, tenant, agent_id, role, scopes) so that downstream systems can make granular allow/deny decisions.
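The gateway pattern above can be sketched in a few lines. This is a minimal, illustrative deny-by-default evaluator, not any specific product's policy schema: the agent IDs, tool names, and threshold are hypothetical, and a real gateway would load policies from declarative files and verify token claims before evaluating them.

```python
from dataclasses import dataclass, field

@dataclass
class ToolCall:
    agent_id: str
    tool: str                    # e.g. "salesforce.read"
    resource: str                # e.g. "Opportunity.Stage"
    attributes: dict = field(default_factory=dict)

# Declarative policies: role-based boundaries (which tools, which objects)
# plus attribute-based conditions (amount, destination). Names are illustrative.
POLICIES = [
    {"agent_id": "support-bot", "tool": "salesforce.read",
     "resources": {"Case", "Contact"}},
    {"agent_id": "support-bot", "tool": "salesforce.update",
     "resources": {"Opportunity.Stage"},
     "condition": lambda a: a.get("amount", 0) < 10_000
                        and a.get("destination") == "internal"},
]

def authorize(call: ToolCall) -> bool:
    """Deny by default; allow only if a policy explicitly matches the call."""
    for p in POLICIES:
        if p["agent_id"] != call.agent_id or p["tool"] != call.tool:
            continue
        base = call.resource.split(".")[0]
        if call.resource not in p["resources"] and base not in p["resources"]:
            continue
        cond = p.get("condition")
        if cond is None or cond(call.attributes):
            return True
    return False
```

The design choice worth noting is that the agent never sees the policy: the gateway evaluates every tool call from the outside, so a prompt-injected agent can request anything but obtain only what a policy explicitly grants.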
The nuance is time. Human RBAC often assumes standing membership in a role. For agents, time-bounded permissions reduce exposure. Issue tokens with a one-hour TTL. Scope access to the current conversation or task. Where possible, require human approval for sensitive operations (payments, bulk deletes, permission changes). That way the agent doesn't carry broad, long-lived rights by default.
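Time-bounded, scoped credentials can be sketched with nothing more than the standard library. This is a simplified JWT-like token for illustration only: the signing key, scope names, and claim fields are assumptions, and production systems would use a real token service (OAuth/JWT) with managed keys.

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"demo-signing-key"  # illustrative; use a managed secret in practice

def issue_token(agent_id: str, scopes: list[str], ttl_seconds: int = 3600) -> str:
    """Mint a short-lived token whose claims name the agent and its scopes."""
    claims = {"agent_id": agent_id, "scopes": scopes,
              "exp": int(time.time()) + ttl_seconds}
    body = base64.urlsafe_b64encode(json.dumps(claims).encode())
    sig = hmac.new(SECRET, body, hashlib.sha256).hexdigest().encode()
    return (body + b"." + sig).decode()

def verify_token(token: str, required_scope: str) -> bool:
    """Reject on bad signature, expiry, or missing scope -- in that order."""
    body, sig = token.rsplit(".", 1)
    expected = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False
    claims = json.loads(base64.urlsafe_b64decode(body))
    return time.time() < claims["exp"] and required_scope in claims["scopes"]
```

Because the expiry and scope travel inside the signed claims, a leaked token is only useful for one narrow job for a bounded window, rather than being a standing credential.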
Sensitivity Labels and AI-Accessible Data
Even with RBAC in place, the agent may be allowed to touch data that shouldn't be summarized, extracted, or sent to an LLM. Sensitivity labels and encryption usage rights determine what the AI can actually do with that data.
Microsoft Purview's approach is instructive. Copilot and agents honor sensitivity labels and usage rights across Microsoft 365. The critical permission for "can the AI use this content?" is EXTRACT (often shown as "Copy and extract content"). If a user has VIEW but not EXTRACT on encrypted content, Copilot won't summarize it; it can only reference it with a link so the user can open it outside the AI. You can label documents and emails, apply encryption with custom usage rights, and block extraction for the most sensitive items. Copilot won't process files where user-defined permissions block extraction. When Copilot creates new content from labeled items, the new content inherits the highest-priority sensitivity label and its protection—so generated output stays classified. Double Key Encryption (DKE) data is out of scope entirely: Copilot and agents can't access it, which is by design for the strictest data.
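The VIEW-versus-EXTRACT distinction can be mirrored in any retrieval pipeline, not just Microsoft 365. A hypothetical pre-LLM filter, with an invented rights table standing in for whatever labeling system is in place:

```python
from enum import Flag, auto

class UsageRights(Flag):
    NONE = 0
    VIEW = auto()
    EXTRACT = auto()  # the right the AI needs to summarize or quote content

# Illustrative rights table keyed by (user, sensitivity label).
RIGHTS = {
    ("alice", "General"):      UsageRights.VIEW | UsageRights.EXTRACT,
    ("alice", "Confidential"): UsageRights.VIEW,  # can open it, AI cannot use it
}

def ai_can_use(user: str, label: str) -> bool:
    """VIEW alone lets the human open the item; only EXTRACT lets the AI
    process it. Unknown (user, label) pairs default to no rights."""
    rights = RIGHTS.get((user, label), UsageRights.NONE)
    return bool(rights & UsageRights.EXTRACT)

def retrieve_for_prompt(user: str, docs: list[tuple[str, str]]) -> list[str]:
    """Keep only documents the AI may process; the rest should surface
    as links the user opens outside the AI."""
    return [name for name, label in docs if ai_can_use(user, label)]
```

The point of the sketch is the placement of the check: it runs before content reaches the model, so classification, not the prompt, decides what the AI ingests.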
There are limits. Container labels (on Teams groups or SharePoint sites) aren't inherited by items in those containers for Copilot, so channel chat or site content may not carry the container's sensitivity context into the AI. Sensitivity labels on Teams meetings and chat aren't yet recognized by Copilot in the same way. And if you use Graph Connectors or plugins to pull in external data, sensitivity labels and encryption from those sources may not be enforced in Copilot Chat. Sensitivity labels give you a strong lever for "what can the AI read and reuse?" but you have to know where the boundaries are and supplement with DLP and access control.
A practical takeaway: configure labels so that high-sensitivity content does not grant EXTRACT to the audiences that use Copilot, or use DLP policies to prevent Copilot from summarizing specific labeled files and emails. You can also use the PowerShell setting that prevents Office from sending content to connected experiences (including Copilot) for chosen labels. That way, the same classification scheme you use for humans governs what the AI can see and extract.
Why Purview's Model Matters
Purview ties together identity, classification, and usage rights so that Copilot's behavior is constrained by policy, not only by prompt or app design. The agent doesn't get to decide what's sensitive. The label and the usage rights do. That's the right direction: authorization and information protection should be enforced in the stack, not only in the prompt. The same idea applies beyond Microsoft 365. Any environment where an LLM or agent touches enterprise data should have an answer to "what can this principal access?" and "what can it do with that data?"—RBAC for the first, sensitivity and usage rights (or their equivalents) for the second.
Audit matters too. Purview records Copilot interactions for compliance and discovery. You get visibility into what was searched and what was returned (with full prompt/response content available via eDiscovery or DSPM for AI). So you can detect oversharing, prove compliance, and respond to incidents. Without that, the agent is a black box with broad access.
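Outside of Purview, the minimum viable version of that audit trail is one structured record per agent action. A sketch with illustrative field names (not any product's schema); note the prompt is stored as a hash here, with full content retained in a separate, access-controlled store if discovery requires it:

```python
import json
import time
import uuid

def audit_record(agent_id: str, action: str, resource: str,
                 decision: str, prompt_hash: str) -> str:
    """One structured line per agent action -- enough to answer
    "what did the agent touch, and was it allowed?" after the fact."""
    return json.dumps({
        "id": str(uuid.uuid4()),
        "ts": int(time.time()),
        "agent_id": agent_id,
        "action": action,            # e.g. "search", "summarize", "update"
        "resource": resource,
        "decision": decision,        # "allow" | "deny" | "needs_approval"
        "prompt_hash": prompt_hash,  # hash in the hot log, not raw prompt
    })
```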
Closing the Gap
Closing the access control gap means treating the LLM or agent as a principal with its own identity and least-privilege scope. Give it dedicated credentials or tokens, not the user's full context or a shared god-account. Apply fine-grained RBAC: per-agent or per-use-case roles, attribute-based rules for risky actions, short-lived scoped tokens. Use sensitivity labels and usage rights so the AI can't extract or summarize data that shouldn't leave the classification boundary. Enforce that in the platform (as with Purview) and in policy gateways in front of tool use. And audit. Review AI identities in the same access reviews you use for high-privilege humans; monitor for new permissions and new data sources. The point: the agent shouldn't operate with more permissions than any employee, and when it touches sensitive data, the same controls that protect that data from people should apply to the agent.
Assessing AI access controls or designing RBAC and sensitivity strategies? Reach out for independent AI risk assessments and governance program design.