AI is explicitly on the list. In November 2025, the SEC's Division of Examinations released its fiscal year 2026 examination priorities, the first under Chairman Paul Atkins, and artificial intelligence is named among them. That doesn't mean examiners will show up with a checklist titled "AI." They'll be looking at how you use automated investment tools, algorithms, and AI across fraud prevention, AML, trading, and back-office operations, and at whether what you say about those systems matches what you do. If you're an investment adviser or broker-dealer using or marketing AI, here's what to expect and how to prepare.
What the 2026 Priorities Actually Say About AI
The priorities document groups AI with "emerging financial technology" and ties it to information security and operational resiliency. The Division will focus on risks associated with automated investment tools, AI technologies, and trading algorithms. The language is deliberate: examiners will assess whether firms' representations regarding these technologies are accurate; whether operations and controls are consistent with disclosures to investors; whether use of such tools leads to advice or recommendations consistent with investor profiles and strategies; and whether controls exist to monitor that advice and those recommendations. For AI specifically, the focus is on whether firms accurately disclose their AI capabilities and whether they have appropriate controls to monitor their use of AI.
You're not being examined on "AI" in the abstract. You're being examined on accuracy of claims, alignment between what you tell investors and what you do, and the existence of policies and procedures that govern and supervise AI use. The bar is "can you show that your AI-related disclosures are true and that you're supervising the systems behind them?" not "do you have an AI policy?"
The AI Washing Precedent
The 2026 priorities didn't come out of nowhere. In March 2024, the SEC settled charges against two investment advisers for making false and misleading statements about their use of AI, in what were widely described as the first "AI washing" cases. Delphia (USA) Inc. claimed it used client data from social media and banking accounts to make investment decisions. During a 2021 exam, Delphia admitted it had never used that data or built algorithms that relied on it, then kept making the claims anyway. Penalty: $225,000. Global Predictions, Inc. touted "expert AI-driven forecasts" and called itself the "first regulated AI financial advisor." The SEC didn't buy it. $175,000. That's $400,000 in civil penalties between them, and a clear signal: examiners will compare your marketing and disclosures to your actual capabilities and implementation. If you say you use AI, they'll want to see where, how, and whether it's real machine learning or rule-based automation dressed up as AI. That distinction matters: calling a rules engine "AI" when it isn't is exactly the kind of gap the Division is now primed to find.
What Will Examiners Actually Ask For?
Representations vs. reality: expect requests for materials that show what you've told investors and clients about AI, automated tools, or algorithms, and then evidence that your systems and processes match. That means marketing copy, Form ADV narrative, pitch decks, and website language on one side, and technical or operational documentation on the other. If you say you use AI for portfolio construction, they'll want to see how that works, what data feeds it, and who oversees it. If you don't actually use ML in the way you've described, or you've oversold it, that's where exam findings and potential referrals start.
Policies and procedures to monitor and supervise AI: the priorities explicitly call out whether firms have designed and implemented policies and procedures to monitor and supervise the use of AI for fraud prevention and detection, back-office operations, AML, and trading. You need something in writing that addresses those use cases (who's responsible, how often it's reviewed, what triggers an escalation, and how you ensure that AI-driven outputs such as alerts, recommendations, or trades are consistent with your fiduciary or best-interest obligations). A one-page "we use AI responsibly" policy won't cut it if you can't point to specific procedures for the places AI actually touches the business.
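To make that concrete, here is a minimal sketch of what an AI use-case register might look like, assuming a firm tracks this in lightweight Python tooling; the use cases, owners, cadences, and triggers below are hypothetical illustrations, not a regulatory template:

```python
from dataclasses import dataclass

@dataclass
class AIUseCaseProcedure:
    """One entry in a firm's AI use-case register (illustrative fields only)."""
    use_case: str             # where AI touches the business
    owner: str                # named role responsible for supervision
    review_cadence_days: int  # how often the control is reviewed
    escalation_trigger: str   # what forces a human escalation
    supervisory_control: str  # the documented oversight mechanism

# Hypothetical entries; every value below is an assumption for the sketch.
REGISTER = [
    AIUseCaseProcedure(
        use_case="AML transaction monitoring",
        owner="BSA/AML Officer",
        review_cadence_days=90,
        escalation_trigger="Missed-alert rate exceeds back-tested baseline",
        supervisory_control="Quarterly validation against the legacy rule set",
    ),
    AIUseCaseProcedure(
        use_case="Trading signal generation",
        owner="Head of Trading",
        review_cadence_days=30,
        escalation_trigger="Signal conflicts with a client mandate",
        supervisory_control="Pre-trade suitability check plus sampled post-trade review",
    ),
]
```

The tooling doesn't matter. What matters is that each place AI touches the business has a named owner, a review cadence, and a defined escalation path you can hand an examiner.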
Data sources and inputs: automated investment tools and algorithms are only as good as their inputs. Examiners are likely to ask what data your AI or algos use, where it comes from, and how you validate or control it. The Delphia case was partly about claimed data use that didn't exist. The flip side is firms that do use nonstandard or alternative data: you need to be able to explain how that data is used, whether it's appropriate for the strategy or advice you're giving, and whether your disclosures reflect it. Gaps between stated and actual data sources are a fast path to a deficiency or worse.
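One way to stay ahead of those questions is a data-source inventory that flags inputs that are undisclosed or haven't been validated recently. A minimal sketch, assuming hypothetical feed names and an arbitrary 180-day validation window:

```python
from datetime import date

# Hypothetical inventory of inputs feeding an automated tool. Field names,
# vendors, and the staleness threshold are assumptions, not a required format.
DATA_SOURCES = [
    {"name": "end_of_day_prices", "vendor": "ExampleVendor", "disclosed": True,
     "last_validated": date(2025, 11, 1), "validation": "completeness + outlier scan"},
    {"name": "alt_sentiment_feed", "vendor": "ExampleAltData", "disclosed": False,
     "last_validated": date(2025, 6, 15), "validation": "sample audit vs. raw source"},
]

def stale_or_undisclosed(sources, max_age_days=180, today=None):
    """Flag inputs that are undisclosed or overdue for validation."""
    today = today or date.today()
    return [s["name"] for s in sources
            if not s["disclosed"] or (today - s["last_validated"]).days > max_age_days]

print(stale_or_undisclosed(DATA_SOURCES, today=date(2025, 12, 1)))
# -> ['alt_sentiment_feed']  (in use but never mentioned in disclosures)
```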
Advice quality and investor alignment: the Division will assess whether the use of automated tools leads to advice or recommendations consistent with investor profiles and stated strategies. Even if your AI or algo is "real," examiners will care whether its outputs are appropriate for the client. That ties into Reg BI for broker-dealers (care obligation, conflict disclosure) and fiduciary duty for advisers. You need to be able to show that you've considered how the tool's outputs align with client objectives, time horizons, and risk tolerance, and that you're not just plugging everyone into the same model without oversight.
AI in fraud prevention, AML, and trading: the priorities name these functions explicitly. If you use AI for any of them, expect questions about how it's used, how it's tested, and how you monitor for errors or bias. For AML in particular, the Division has long cared about the adequacy of programs; layering AI on top means they'll want to see that the AI component is governed, validated, and subject to the same kind of oversight you'd apply to a traditional rule set. For trading, the focus will be on whether algorithms are supervised, whether they can produce recommendations or execution that conflict with client interests, and whether you've disclosed the role of automation.
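For the AML case in particular, one illustration of "the same kind of oversight you'd apply to a traditional rule set" is back-testing the AI model's alerts against the legacy rules on identical historical transactions; the function and transaction IDs below are assumptions for the sketch, not a prescribed method:

```python
# Compare an AI alert model against the legacy rule set on the same history.
def backtest_overlap(ai_alerts: set, rule_alerts: set) -> dict:
    """Summarize where the AI model and the legacy rules agree or diverge."""
    return {
        "both": len(ai_alerts & rule_alerts),
        "ai_only": len(ai_alerts - rule_alerts),     # candidate new coverage to review
        "rules_only": len(rule_alerts - ai_alerts),  # potential AI misses to escalate
    }

summary = backtest_overlap({"txn_14", "txn_92"}, {"txn_14", "txn_55"})
assert summary == {"both": 1, "ai_only": 1, "rules_only": 1}
```

A "rules_only" count that grows over time, documented and escalated, is exactly the kind of evidence that shows examiners the AI component is governed rather than trusted blindly.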
The "Not a Gotcha" Signal (and Why It Still Matters)
Chairman Atkins stated that examinations "should not be a 'gotcha' exercise" and that the Division aims for transparency, consistency, and "deliberate and active partnerships with compliance professionals." Some read that as a softer approach: fewer enforcement referrals, more opportunity to fix issues during the exam. That may be true. It doesn't mean examiners will skip AI or give you a pass if your disclosures are misleading or your controls are missing. The best way to benefit from a more collaborative stance is to show up with a coherent story: accurate disclosures, documented procedures, and evidence that you're supervising the technology you're using and marketing. The firms that get through clean are the ones that can produce that story quickly. The ones that can't will still face findings or referrals; the difference may be how much runway they get to remediate before a referral.
How to Prepare
Audit your AI-related disclosures. Pull every place you describe your use of AI, algorithms, or automated tools (ADV, marketing, website, RFP responses). For each claim, map it to the actual system or process. If you say "AI-driven," be able to show what's driven and how. If you've never used certain data or methods, stop saying you do. Align the narrative to reality before the exam, not after.
Document policies and procedures for AI use. For each area where you use AI (trading, AML, fraud detection, back-office, investment recommendations), have written procedures that cover purpose, ownership, inputs and data sources, oversight and review cadence, and escalation. Tie them to your existing compliance and supervisory framework so examiners can see that AI is not an ungoverned island.
Run a representation vs. reality check. Have compliance or a designated lead compare your external statements about AI to your internal capabilities. Treat it like a mini-exam: if an examiner asked "prove this claim," could you? If not, change the claim or change the capability.
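That mini-exam can be as simple as a claims-to-evidence map where any unsupported claim gets flagged. A sketch with hypothetical claims and evidence identifiers:

```python
# Each external AI claim must map to at least one piece of internal evidence.
# The claims and evidence keys are invented examples, not from any real firm.
CLAIMS_TO_EVIDENCE = {
    "Website: 'AI-driven portfolio construction'": [
        "model_card_portfolio_v3", "backtest_report_2025Q3"],
    "ADV Part 2A: 'machine learning risk signals'": [],  # no support on file
}

for claim, evidence in CLAIMS_TO_EVIDENCE.items():
    if not evidence:
        print(f"REVISE OR SUBSTANTIATE: {claim}")
```

If the evidence list for a claim is empty, either the claim changes or the documentation gets built, before an examiner asks.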
Know your data. Document what data feeds your automated tools and AI, where it comes from, and how you validate or control it. Be ready to explain why it's appropriate for the use case and whether your disclosures mention it.
Connect AI to investor protection. Be able to explain how you ensure that AI- or algo-driven advice or recommendations are consistent with investor profiles and strategies. That might be sampling, testing, oversight committees, or model governance. It has to be something you can describe and show.
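As one illustration of the sampling approach, you might periodically pull a random sample of AI-generated recommendations and route mismatches to human review; the records, risk-scoring scale, and field names here are invented for the sketch:

```python
import random

# Hypothetical recommendation log; scores on an assumed 1-10 risk scale.
RECOMMENDATIONS = [
    {"client": "C-101", "risk_score": 4, "client_risk_tolerance": 5},
    {"client": "C-202", "risk_score": 8, "client_risk_tolerance": 3},
    {"client": "C-303", "risk_score": 2, "client_risk_tolerance": 4},
]

def sample_for_review(recs, k=2, seed=None):
    """Randomly sample recommendations; return those exceeding client tolerance."""
    rng = random.Random(seed)
    sample = rng.sample(recs, k=min(k, len(recs)))
    return [r for r in sample if r["risk_score"] > r["client_risk_tolerance"]]

# Any sampled rec riskier than the client's documented tolerance would go
# to a supervision queue with a recorded disposition.
flagged = sample_for_review(RECOMMENDATIONS, seed=2026)
```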
The 2026 priorities put AI in the same sentence as cybersecurity, Regulation S-P, and operational resiliency. Examiners will be looking at the whole picture: whether you're protecting data, whether you're resilient, and whether your use of new technology is disclosed accurately and supervised effectively. AI is one slice of that. Get the representations right, document the controls, and you're in a much better position when the exam team asks for the first set of documents.
We support AI governance and risk documentation for SEC exam readiness. Get in touch.