
Writing an AI Acceptable Use Policy That People Actually Read: A Practitioner's Template


Most AI acceptable use policies are legal boilerplate. They run to fifteen pages, define "artificial intelligence" in three paragraphs, and sit on the intranet until an audit asks for them. Nobody reads them. Nobody can find the answer to "can I use ChatGPT for this?" in under five minutes. When 60% of employees are already using AI at work and only 18% know the company has a policy, the policy isn't failing for lack of detail. It's failing because it's unusable. What actually works is a tiered structure (prohibited, cautious, approved), plain-language rules tied to real scenarios, and a one-page quick-reference card that lives where people look. Here's what to include, what to skip, and how to enforce it.

The Tiered Structure

A policy that says "use AI responsibly" or "don't put confidential data in AI" is too vague to follow. A policy that lists every tool and every scenario is unmaintainable. A three-tier structure solves both problems: it's short enough to scan and specific enough to act on.

Prohibited. This tier is the hard no. No exceptions without a formal, documented waiver. Typical entries: don't input confidential, proprietary, or regulated data (customer PII, source code, strategic plans, health or financial data) into AI tools that aren't on the approved list. Don't use AI for decisions that directly affect people's rights or eligibility (hiring, credit, benefits, discipline) unless the system has been through your formal risk assessment and approval process. Don't present AI-generated content as human-authored where the organization has committed to disclosure. Don't circumvent security or access controls using AI. List the categories and the rationale in one or two sentences each. Employees need to know the bright lines. Prohibited means "don't do this, full stop."

Cautious. This tier is "you can, but only under these conditions." Use approved tools only. Use them only for the use cases and data types they're approved for. Don't use AI for the prohibited categories above. If you're unsure whether your use case or data type is allowed, ask (and name who to ask: a mailbox, a team, or a page that routes the question). Cautious is where most day-to-day use lives. It's the default: use the approved stack, stay within the guardrails, and when in doubt, check. It keeps the policy from being a blanket no while making the boundaries clear.

Approved. This tier is the positive list. Which tools, for which use cases, with what data? Publish a short approved list (or a link to the living list) so people don't have to guess. "Approved for general productivity: Tool A (non-sensitive internal use only), Tool B (public and internal non-confidential content). Approved for code assistance: Tool C (no proprietary or customer data)." The list can live in a separate doc or portal that you update as you add tools from your sandbox or procurement. The policy itself just needs to say: use only tools and use cases on the approved list, and follow the data and use-case restrictions stated there.

With three tiers, an employee can quickly answer: is this prohibited? If not, is my tool and use case on the approved list? If not, I need to ask. No wading through legalese.
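
If you maintain the approved list as structured data rather than prose, the same source of truth can answer that question programmatically and, as shown later, generate the one-pager. A minimal sketch in Python; the tool names, scopes, and contact address are placeholders, not a recommended stack:

```python
# Minimal sketch: the three tiers as structured data.
# All tool names, scopes, and the contact are hypothetical placeholders.
AI_USE_POLICY = {
    "prohibited": [
        "Confidential, proprietary, or regulated data in non-approved tools",
        "Decisions affecting rights or eligibility without formal risk review",
        "Presenting AI output as human-authored where disclosure is required",
        "Using AI to circumvent security or access controls",
    ],
    "approved": {
        "Tool A": {"use_cases": ["general productivity"],
                   "data": "non-sensitive internal only"},
        "Tool B": {"use_cases": ["general productivity"],
                   "data": "public and internal non-confidential"},
        "Tool C": {"use_cases": ["code assistance"],
                   "data": "no proprietary or customer code"},
    },
    "contact": "ai-questions@example.com",  # placeholder mailbox
}

def check(tool: str) -> str:
    """Answer the employee's first two questions: approved, or ask."""
    entry = AI_USE_POLICY["approved"].get(tool)
    if entry:
        return f"{tool}: approved for {', '.join(entry['use_cases'])} ({entry['data']})"
    return f"{tool}: not on the approved list - ask {AI_USE_POLICY['contact']}"

print(check("Tool C"))
print(check("SomeNewTool"))
```

The exact format matters less than the principle: one machine-readable list, so the portal, the policy, and the quick reference can't disagree about what's approved.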

Plain Language and Real Scenarios

Legal and compliance will want definitions and carve-outs. Keep the main policy in plain language and put the definitions in an appendix or a separate "definitions" section. In the body, use scenarios. "Don't paste customer contact lists into ChatGPT" is clearer than "don't input PII into non-approved AI systems." "Don't use an unvetted AI tool to screen resumes" is clearer than "AI-assisted decision-making in employment contexts requires prior risk assessment." One or two short examples under each tier help people recognize their situation. You're not writing for lawyers in the primary doc. You're writing for the person who has a tab open and needs an answer in thirty seconds.

Map the scenarios to your real risks. If you're in healthcare, the examples should mention patient data and clinical use. If you're in finance, mention customer financials and credit. If you're in tech, mention source code and product roadmaps. Generic policies get ignored because they don't feel relevant. Scenarios that match your industry and your past incidents get remembered.

The One-Page Quick-Reference Card

The full policy can be the source of truth for audits and legal. For daily use, create a one-pager. One side of a sheet, or one screen. Title: "AI at [Company]: Quick Reference." Three sections: Prohibited (bullet list, 3–5 items). Cautious (use approved tools only, stay in scope, when in doubt ask). Approved (link or short list). Plus: "Questions? [Contact or link]." That's it. No definitions. No exceptions. No fine print. This is the thing you attach to onboarding, pin in Slack, and hand out when someone says "what's the policy?" If the one-pager and the full policy conflict, the full policy wins, but 95% of the time the one-pager is enough. People will only open the long doc when they have a borderline case or need to cite something for a waiver.

Keep the one-pager in sync with the full policy when you add or remove approved tools or change a tier. A stale one-pager undermines trust in the whole program.
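
One way to make staleness impossible is to generate the one-pager from the structured list, so there is a single source of truth. A sketch building on the hypothetical AI_USE_POLICY dictionary from the earlier example:

```python
# Sketch: render the quick-reference card from the same policy data,
# so the one-pager can't drift from the full policy's approved list.
def render_one_pager(policy: dict, company: str = "Acme") -> str:
    lines = [f"AI at {company}: Quick Reference", "", "PROHIBITED:"]
    lines += [f"  - {item}" for item in policy["prohibited"]]
    lines += ["", "CAUTIOUS: use approved tools only, stay in scope, "
                  "when in doubt ask.", "", "APPROVED:"]
    for tool, entry in policy["approved"].items():
        lines.append(f"  - {tool}: {', '.join(entry['use_cases'])} ({entry['data']})")
    lines += ["", f"Questions? {policy['contact']}"]
    return "\n".join(lines)

print(render_one_pager(AI_USE_POLICY))
```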

What to Skip

Resist the urge to pad the policy with everything that could possibly go wrong. Skip the five-paragraph history of AI. Skip the exhaustive definition of "AI" that tries to cover every algorithm (you can define "AI" for scope in one sentence: "AI tools and systems that accept user input and generate or influence output, including generative AI, coding assistants, and AI-powered SaaS features"). Skip separate sections for every department unless you have genuinely different rules. Skip making the policy the place where you document your risk assessment methodology; that belongs in a separate governance doc. The policy should answer: what can I use, for what, with what data? The rest is supporting material.

Also skip weasel language. "Employees should avoid..." and "It is recommended that..." create ambiguity. Use "must not" for prohibited and "may only" or "must use only" for approved. If something is truly discretionary, say "ask [X] for guidance" rather than leaving it vague.

How to Enforce It

A policy nobody enforces is a policy nobody believes. Enforcement doesn't have to mean punishment first. It means: we check, we follow up, and we have consequences when someone crosses a bright line.

Discovery. You can't enforce what you can't see. Use the channels you have (CASB, SSO, network, surveys) to detect unsanctioned AI use. Discovery isn't automatically punitive. It's the input to "we found this use; is it in scope or not?" Many violations are ignorance, not defiance. The first step is awareness and correction.
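
What discovery looks like depends on your stack, but the core check is simple: compare observed AI-tool domains against the approved list. A sketch assuming a hypothetical CSV log export with a domain column; a real deployment would use your CASB or SSO provider's own reporting:

```python
# Sketch: flag AI-tool domains in a proxy/SSO log export that aren't
# on the approved list. Domain names and CSV format are hypothetical.
import csv

APPROVED_DOMAINS = {"tool-a.example.com", "tool-b.example.com"}
KNOWN_AI_DOMAINS = {"chat.openai.com", "claude.ai", "gemini.google.com",
                    "tool-a.example.com", "tool-b.example.com"}

def find_unsanctioned(log_path: str) -> dict:
    """Return {domain: hit_count} for AI domains not on the approved list."""
    hits: dict = {}
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):  # assumes a 'domain' column
            domain = row["domain"].strip().lower()
            if domain in KNOWN_AI_DOMAINS and domain not in APPROVED_DOMAINS:
                hits[domain] = hits.get(domain, 0) + 1
    return hits

# Each hit is an input to triage ("is this in scope or not?"),
# not an automatic violation.
```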

Triage. When you find use that doesn't match the policy, triage. Is it prohibited (e.g., confidential data in a non-approved tool)? That's a serious conversation: contain the exposure, document the incident, and apply your existing discipline and incident process. Is it cautious-tier drift (e.g., approved tool used for something not on the list)? That's a correction: point them to the right tool or use case, or open a request to expand the approved list. Is it someone who didn't know the policy? Training and communication, plus a clear pointer to the one-pager. Escalate based on severity and intent, not a one-size-fits-all response.
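
The routing logic is simple enough to write down, which also helps with the consistency point below. A sketch with illustrative field names, not a prescribed incident schema:

```python
# Sketch of the triage routing. Field names are illustrative.
def triage(finding: dict) -> str:
    """Route a discovered use. Expected keys (all booleans):
    tool_approved, use_case_approved, sensitive_data, knew_policy."""
    if finding["sensitive_data"] and not finding["tool_approved"]:
        # Prohibited-tier violation: contain, document, escalate.
        return "incident"
    if finding["tool_approved"] and not finding["use_case_approved"]:
        # Cautious-tier drift: correct, or expand the approved list.
        return "correction"
    if not finding["knew_policy"]:
        # Ignorance, not defiance: training and the one-pager.
        return "education"
    return "in_scope"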

Consistency. Apply the same logic to the same situations. If you make an example of one person for pasting code into ChatGPT and ignore the same behavior elsewhere, the policy loses credibility. Document how you're handling violations (anonymized or aggregated) so that enforcement is predictable. And enforce upward: if leadership flouts the policy, the rest of the organization will too.

Feedback loop. When people ask "can I use X for Y?" use those questions to update the approved list, the scenarios, or the one-pager. If the same question keeps coming up, the policy or the approved options aren't clear enough. Enforcement isn't only about catching violations. It's about making the policy easier to follow so there are fewer violations to catch.

An AI acceptable use policy that people actually read is short, tiered, scenario-based, and backed by a one-pager and consistent follow-through. Skip the boilerplate. Write for the person who has a tab open and needs to know: can I do this or not?


Updating your AI policy or building enforcement into your governance program? We help with independent AI risk assessments and policy-aligned controls. Contact us.
