AI Policy · Governance · Enablement · ISACA

60% of Employees Use AI at Work: Only 18% Know Their Company Has a Policy


About 60% of employees say they use AI at work. Only 18% know their company has an AI policy. The policy exists. The workforce doesn't know it. Adoption has already outrun governance, and the gap isn't a communication problem alone. It's structural. Discovery without enforcement doesn't change behavior. Enforcement without enablement pushes use underground. Closing the gap means a phased program that starts with visibility and ends with structured enablement, not the other way around.

When Discovery Isn't Enough

Many organizations start by trying to see what's out there. They run surveys, check CASB logs, map shadow AI. They get a list. Then nothing much changes. People keep using the same tools because nobody told them to stop, nobody gave them an alternative, and the policy document lives on the intranet nobody reads.

Discovery alone fails for a simple reason: it doesn't create a decision or a consequence. You know that 40% of your teams are hitting unapproved AI endpoints. So what? Unless someone is accountable for following up, unless there's a clear "do this instead," and unless the organization treats policy as real, the list just sits there. Discovery is necessary. It's not sufficient. You need a path from "we found it" to "we've done something about it," and that path has to include both clarity (what's allowed, what isn't) and enablement (what to use instead, how to get it).

When Enforcement Without Enablement Backfires

The opposite mistake is to crack down without offering a path forward. You announce that unsanctioned AI is prohibited. You block domains or threaten consequences. Usage doesn't disappear. It goes underground. People use personal devices, home networks, or tools that aren't yet on the block list. You've increased shadow AI, not reduced it, because you've given people a reason to hide what they're doing and no approved way to do it.

Enforcement without enablement assumes that people will stop needing AI if you tell them to. They won't. The work that drove them to use ChatGPT or a random summarizer is still there. They find another way. You want to channel use into tools and use cases you've assessed, approved, and can govern. That requires a positive offer: here's what you can use, here's how to get it, here's what's in and out of bounds. Without that, enforcement just displaces risk and makes it harder to see.

The Visibility-First Phase

Closing the gap starts with visibility, but visibility with a purpose. You're not just building a list. You're building the basis for decisions and communication.

Inventory what's in use. Use the channels you have: CASB, SSO, network logs, surveys, and conversations with business units. Build a picture of which AI tools and use cases are live today. Classify by risk and data sensitivity. You need to know what you're governing before you can tell people what's allowed.
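
A minimal sketch of that inventory step, assuming you can export CASB or proxy logs as a CSV with user, team, and destination-domain columns. The endpoint list, risk labels, and file name are illustrative placeholders, not a vetted taxonomy:

```python
import csv
from collections import Counter, defaultdict

# Hypothetical mapping of known AI endpoints to a coarse risk tier.
AI_ENDPOINTS = {
    "chat.openai.com": "public-llm",
    "api.openai.com": "api-llm",
    "claude.ai": "public-llm",
    "some-free-summarizer.example": "unvetted",
}

def build_inventory(log_path: str) -> dict:
    """Tally which teams hit which AI endpoints, from a CSV log export."""
    usage = defaultdict(Counter)  # team -> Counter of endpoint domains
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            domain = row["destination_domain"]
            if domain in AI_ENDPOINTS:
                usage[row["team"]][domain] += 1
    return usage

inventory = build_inventory("proxy_export.csv")
for team, counts in sorted(inventory.items()):
    for domain, hits in counts.most_common():
        print(f"{team}: {domain} ({AI_ENDPOINTS[domain]}) - {hits} hits")
```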

Identify ownership. For each use case or tool, know who the logical owner is (team, department, or function). Ownership isn't for blame. It's for "who do we talk to when we need to sanction, replace, or restrict?" Without ownership, visibility is a report that nobody acts on.
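
Ownership can be as lightweight as a lookup table maintained alongside the inventory, so every finding resolves to a named contact rather than a team-less report. The tools and contacts below are hypothetical:

```python
# Illustrative only: attach an accountable owner to each discovered tool.
OWNERS = {
    "chat.openai.com": {"owner": "Marketing", "contact": "head-of-marketing"},
    "api.openai.com": {"owner": "Platform Engineering", "contact": "eng-lead"},
}

def owner_for(tool: str) -> str:
    """Return the follow-up contact for a tool, or flag it for triage."""
    entry = OWNERS.get(tool)
    return entry["contact"] if entry else "UNASSIGNED - triage needed"

print(owner_for("chat.openai.com"))          # head-of-marketing
print(owner_for("random-ai-tool.example"))   # UNASSIGNED - triage needed
```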

Map policy to reality. Compare what your policy says (if you have one) to what's actually happening. Where are the gaps? Where is the policy vague or silent? Where would employees have no way to know the policy applies to what they're doing? That map tells you what to clarify and what to communicate first.
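
One way to make that map concrete is a simple comparison between the policy's allowlist and what discovery actually found. A sketch, reusing the hypothetical tool names from the inventory example above:

```python
# Hypothetical policy state: one sanctioned tool, one explicit prohibition.
APPROVED = {"api.openai.com"}          # sanctioned under an enterprise contract
PROHIBITED = {"some-free-summarizer.example"}

def classify(observed_tools: set[str]) -> dict[str, str]:
    """Label each observed tool: approved, prohibited, or a policy gap."""
    status = {}
    for tool in observed_tools:
        if tool in APPROVED:
            status[tool] = "approved"
        elif tool in PROHIBITED:
            status[tool] = "prohibited - follow up with owner"
        else:
            status[tool] = "policy silent - clarify and communicate first"
    return status

observed = {"chat.openai.com", "api.openai.com", "some-free-summarizer.example"}
for tool, label in classify(observed).items():
    print(f"{tool}: {label}")
```

The "policy silent" bucket is the one to watch: it is exactly the set of tools where employees would have no way to know the policy applies to them.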

This phase doesn't require a finished policy or a full enablement catalog. It requires enough clarity to stop pretending that "we have a policy" is the same as "people know it and have somewhere to go."

The Enablement Phase

Once you have visibility and ownership, enablement is what makes the policy stick. Enablement means giving people approved options and a clear process, not just a list of don'ts.

Define and publish approved use. Decide which tools and use cases are in scope for sanctioned AI. That might be a short list of enterprise contracts (e.g., a specific LLM API or vendor product) plus clear use cases (e.g., summarization of non-sensitive content, code assistance under certain rules). Publish it where people look: intranet, onboarding, team wikis. Make it easy to find. "We have a policy" only works when "here it is and here's what you can use" is one click away.
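
A machine-readable registry can sit behind that published list, so both people and tooling can answer "can I use X for Y?" in one lookup. The sketch below is one possible shape, with entirely hypothetical tool IDs, use cases, and conditions:

```python
from dataclasses import dataclass

@dataclass
class ApprovedTool:
    name: str
    allowed_use_cases: set[str]
    conditions: str = ""

# Example registry: a single enterprise contract with two sanctioned use cases.
REGISTRY = {
    "enterprise-llm-api": ApprovedTool(
        name="Enterprise LLM API (vendor contract)",
        allowed_use_cases={"summarization-nonsensitive", "code-assist"},
        conditions="No customer PII; code assist per secure-coding rules.",
    ),
}

def is_approved(tool_id: str, use_case: str) -> bool:
    """True only if the tool is registered and the use case is in scope."""
    tool = REGISTRY.get(tool_id)
    return tool is not None and use_case in tool.allowed_use_cases

print(is_approved("enterprise-llm-api", "summarization-nonsensitive"))  # True
print(is_approved("enterprise-llm-api", "marketing-copy"))              # False
```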

Create a path to get access. If the approved option requires access, a license, or training, make the path obvious. A form, a ticket, a single point of contact. The friction of "I don't know how to get the right tool" should be lower than the friction of "I'll just use the free thing." If it isn't, shadow AI wins.

Train and communicate in context. One all-hands announcement isn't enough. Tie policy and enablement to moments that matter: onboarding, project kickoffs, security awareness, and team meetings. Explain what's allowed, what's not, why, and where to go for the approved option. Use the visibility data: "We know many of you use AI. Here's our stance and here's how to do it safely." That message lands differently than a generic "please read the policy."

Revisit and expand. Start with a narrow set of approved use cases if you need to. Add more as you assess tools and negotiate terms. Move the bulk of legitimate use into the sanctioned column over time, so that what remains unsanctioned is the exception you can triage, not the norm you've given up on.

Why Order Matters

Visibility first, then enablement, then enforcement that's consistent and clearly tied to the approved path. If you enforce before you've built visibility, you're punishing in the dark. If you enforce before you've enabled, you're pushing use underground. If you never enforce, the policy is optional and the enablement work is wasted.

The ISACA numbers are a snapshot. Sixty percent using AI, 18% aware of a policy. The fix isn't a better policy document. It's a program: see what's there, give people a real alternative, make the rules known and actionable, and then hold the line. Governance catches up to adoption when visibility and enablement come first.


We help build AI governance programs that match how your organization actually uses AI. Contact us for independent AI risk assessments and policy-aligned enablement.
