December 2025: the SEC's Investor Advisory Committee voted to recommend that the Commission issue guidance requiring issuers to define what they mean by "artificial intelligence," disclose board oversight of AI deployment, and report separately on how they use AI in internal operations and in consumer-facing products. The vote was advisory. The Commission doesn't have to do anything with it. Chairman Paul Atkins has already signaled skepticism: he's argued that principles-based disclosure rules already capture material AI impacts, and that layering on prescriptive AI disclosure could run counter to the push to streamline filings. The question that matters more than the headline: what does this debate mean for how you draft your 10-K, run your board, and talk to investors about AI?
What the IAC Actually Recommended
The Committee didn't ask for a new AI sub-chapter in Regulation S-K. It recommended folding AI into the disclosure framework that already exists: Item 101 (description of business), Item 103 (legal proceedings), Item 105 (risk factors, in the modernized numbering), and Item 303 (MD&A). AI is a cross-cutting theme; it affects how you run the business, what risks you face, and how performance and outlook might change. The right move is to clarify how existing items apply to AI rather than to invent a standalone "AI disclosure" regime. That's a deliberate choice. It avoids creating a checkbox that every company has to fill regardless of materiality, and it keeps the standard materiality-based. If AI isn't material to your business, you don't have to manufacture a section. If it is, you're already supposed to be talking about it under current rules; the guidance would just make the expectations explicit.
The three substantive asks. Definitions: when you say "AI" or "machine learning" in your filings or investor materials, define the term (a sentence the first time you use it, or a short glossary). Board oversight: disclose how the board or a committee oversees AI deployment (who's responsible, what they're told, how often); that tracks the same logic as cybersecurity and other enterprise risk. Deployment and effects: disclose how you're using AI and, where material, the effects on internal operations and on customers or end users; internal might be supply chain, HR, or finance, while consumer-facing might be product features, recommendations, or support. Again, the trigger is materiality. Not every company needs a long narrative. The ones that have made AI central to strategy or operations do.
Why the "Define AI" Ask Is Messier Than It Sounds
The "define what you mean by AI" recommendation draws the most pushback, and the most confusion. The IAC didn't propose a single, Commission-wide definition. It proposed that each issuer define the term for its own filings. Clear upside: flexibility. A biotech using AI for drug discovery and a retailer using it for demand forecasting can each use language that fits their context. Downside: every company could choose a different definition. Investors comparing two 10-Ks might see "AI" in both and assume they're talking about the same thing when they're not. One firm might use "AI" to mean only deep learning models in production; another might include rules-based automation and chatbots. Without a common floor, "we use AI" could mean almost anything. Some commentators have warned that issuer-specific definitions could degenerate into PR: each company picks the definition that makes its use of AI sound as impressive as possible, and comparability suffers.
There's a deeper tension. If the SEC imposed a single definition, it would be accused of being too rigid and of either over- or under-including technologies that evolve fast. If it doesn't, the "define AI" requirement may not deliver what investors want. The best outcome for issuers is probably to define the term narrowly and clearly the first time you use it (e.g., "In this filing, 'AI' refers to machine learning models and related automation that we use for [X, Y, Z]"). That gives readers something to hold onto and signals that you're not using the term as a catch-all. It also reduces the risk that the SEC or a plaintiff later argues that your "AI" claims were vague and therefore misleading.
The Commission's Stance: Principles First, Prescription Second
Atkins and others at the Commission have been careful to say that existing disclosure obligations already require companies to discuss material risks and business developments. If AI is material (to strategy, operations, risk, or financial results), Items 101, 103, 105, and 303 already demand that you address it. The SEC has also warned against boilerplate. In speeches and staff guidance, the message has been: don't drop in generic "we may face risks related to AI" language. Be specific. What AI? What risks? What are you doing about them? From the Commission's perspective, the IAC's recommendations might be redundant: we already have materiality and we're already telling people to avoid boilerplate. Adding formal AI disclosure guidance could be seen as expanding the disclosure burden at a time when the agency is trying to trim it.
That doesn't mean the IAC's work is irrelevant. The Committee was responding to a real gap. Studies and law firm surveys have pointed out that only a fraction of S&P 500 companies provide any AI-related disclosure, and an even smaller fraction disclose board-level oversight of AI. Either (a) AI isn't material to most of those companies, or (b) they're under-disclosing. The IAC was betting on (b): that consistency and clarity would help investors and that explicit guidance would level the playing field. The Commission may still adopt some form of guidance, or it may leave things as they are and rely on exam focus and enforcement to sharpen behavior. Either way, the direction of travel is clear. The SEC is looking at AI in filings. "We didn't have specific guidance" is unlikely to be a defense if your AI narrative is vague or inconsistent with what you actually do.
What's Already Happening: Exams and Enforcement
You don't need new rules to feel the pressure. The Division of Examinations has made AI and automated tools a priority. Examiners are asking how firms describe their use of AI, whether those descriptions match reality, and whether there are policies and procedures governing AI use. On the enforcement side, the SEC has brought AI-washing cases against investment advisers and, in the Presto matter, against a public company. The theories are familiar: materially false or misleading statements about the extent, nature, or ownership of AI capabilities. Even without IAC-backed guidance, the agency is already using existing antifraud and disclosure standards to police AI claims. The IAC recommendations, if adopted, would add clarity. If not adopted, the enforcement and exam focus on AI doesn't go away.
What You Should Do Regardless
Treat the IAC vote as a signal of where sophisticated investors and the Commission's advisory body think disclosure should go, not as a binding checklist.
Tighten your AI narrative in existing items. Where AI is material to business, risk, or results, address it in Items 101, 103, 105, and 303 with the same rigor you'd apply to any other material topic. Be specific. Avoid generic "AI risk" language. If you use AI in operations or in products, say where, how, and what could go wrong, and what you're doing about it.
Define "AI" where you use it. Even without a rule, adopt the IAC's first recommendation. The first time you use "AI" or "machine learning" in your 10-K or investor materials, add a sentence that defines the term for your company. Keep it short and accurate. That improves readability and reduces the chance that someone later argues you were vague or overclaiming.
Document and disclose board oversight. If the board (or a committee) discusses AI (strategy, risk, resourcing), disclose that. You don't need a separate "AI board report." You need a clear statement of how the board oversees AI deployment, consistent with how you describe oversight of other major risks. The 15% figure the IAC cited (companies that disclose board oversight of AI) is low enough that doing this well is a differentiator.
Align external claims with internal reality. Enforcement has focused on gaps between what companies say about AI and what they do. Audit your filings, website, and investor presentations for AI-related claims. Map each claim to a system, process, or capability. If you can't support it, change the claim. If the tech is third-party or human-assisted, say so. That's the same discipline the SEC is already applying in AI-washing cases.
The IAC's recommendations may or may not become formal guidance. The debate is about how much prescription the Commission wants, not whether AI is a legitimate subject of disclosure. Treat AI as a material topic where it is one, define your terms, show board-level attention, and keep your public story aligned with the facts. Do that, and you're in good shape whether or not the Commission ever adopts the Committee's advice.
We help issuers align AI disclosure with actual use and board oversight. Contact us for AI governance and risk documentation.