
FTC Operation AI Comply: How the FTC Is Using Existing Consumer Protection Law to Regulate AI


While state legislatures and the EU are rolling out new AI-specific statutes, the Federal Trade Commission has taken a different path. It isn’t asking Congress for new authority. It’s applying the law that’s already on the books (Section 5 of the FTC Act and related consumer protection tools) to AI marketing, AI-enabled fraud, and AI systems that harm consumers. The vehicle is Operation AI Comply, launched in September 2024: a coordinated crackdown on deceptive AI claims and schemes that has already produced real settlements and a sharp signal to the market.

Chair Lina Khan has been consistent since well before the operation had a name: there is no AI exemption from the laws on the books. Using AI to trick, mislead, or defraud people is illegal. Advertising AI capabilities you can’t substantiate is too. The FTC’s 2023 business guidance, “Keep your AI claims in check,” spelled that out; Operation AI Comply is the enforcement follow-through.

What Is Operation AI Comply?

Operation AI Comply isn’t a new regulation or a voluntary program. It’s a coordinated set of enforcement actions, announced together to maximize visibility, targeting three kinds of conduct: AI washing (false or unsubstantiated claims that a product uses or is powered by AI), AI-enabled deception (tools or schemes that use AI to mislead or defraud consumers), and unsubstantiated performance claims (marketing that overstates what the AI actually does).

At launch, the FTC announced five actions. Three have become the reference cases.

DoNotPay marketed itself as “the world’s first robot lawyer” and claimed its chatbot could replace human attorneys, generate valid legal documents, and even help users “sue for assault without a lawyer.” The FTC’s complaint alleged that the company never tested whether its AI performed at a lawyer’s level and didn’t use attorneys to validate the quality of its legal outputs. In early 2025 the Commission finalized an order: DoNotPay paid $193,000, must notify past subscribers about the settlement, and is prohibited from claiming its service performs like a real lawyer unless it has evidence to back it up. The vote was unanimous.

Rytr sold an AI “Testimonial & Review” product that let subscribers generate detailed, specific-sounding consumer reviews at scale—reviews that were effectively fabricated. The FTC charged that the service was built to produce false and deceptive reviews and that some subscribers used it to create hundreds or thousands of them. The agency approved a final order in December 2024 barring Rytr from advertising or selling any service dedicated to generating consumer reviews or testimonials. (That order was later reopened and set aside under the new administration in late 2025—a reminder that enforcement priorities can shift, even when the underlying legal theory doesn’t.)

Ascend Ecom, Ecommerce Empire Builders, and FBA Machine were targeted for making false earnings claims about AI-powered online business opportunities—the classic “get rich with AI” pitch without the proof. The pattern is the same: AI as marketing hook, claims that can’t be substantiated, and conduct that falls squarely under the FTC’s existing deception and unfairness authority.

Operation AI Comply is best understood as a named enforcement initiative, not a new rulebook. The rules were already there. The FTC is applying them to AI with a visible, coordinated push.

The FTC’s authority here comes from Section 5 of the FTC Act, which prohibits unfair or deceptive acts or practices in or affecting commerce. The agency has long used Section 5 against false advertising, unsubstantiated claims, and schemes that harm consumers. Nothing in the statute carves out AI. When a company claims its product is powered by AI, or that its AI outperforms alternatives, or that consumers can rely on it for high-stakes tasks, those claims are advertising claims. They have to be truthful and substantiated. If they’re not, the FTC can act.

The same goes for AI used as the mechanism of deception. If you build or sell a tool that generates fake reviews, fake testimonials, or deepfakes aimed at misleading people, you’re not insulated because the tool is “AI.” The FTC has made clear that existing law—including the Fair Credit Reporting Act and Equal Credit Opportunity Act where relevant—applies fully to AI systems. The 2023 joint statement from the FTC, DOJ, CFPB, and EEOC put it in one line: there is no AI exemption. Operation AI Comply is the enforcement expression of that position.

From a policy perspective, the FTC’s approach is strategically efficient. It doesn’t need Congress to pass an “AI Act.” It doesn’t need a new rulemaking. It’s interpreting and enforcing decades-old consumer protection law in a new technological context. That’s slower to address some issues (e.g., novel questions of algorithmic discrimination or transparency that might warrant new rules) but fast and flexible for the low-hanging fruit: hype, fraud, and unsubstantiated marketing.

What Gets You in Trouble

The FTC’s own guidance and the Operation AI Comply cases distill into a few practical triggers.

Claiming AI when it’s not really AI, or overclaiming what the AI does. “AI” is a marketing term. The FTC has warned that advertisers overuse and abuse it. If you say your product uses AI, it should. If you say it’s better because of AI, you need competent, reliable evidence. DoNotPay didn’t have evidence that its chatbot performed like a lawyer; that was enough.

Selling or using AI to generate deceptive content. Fake reviews, fake testimonials, and similarly inauthentic material are already in the FTC’s crosshairs. Wrapping that in an “AI tool” doesn’t change the analysis. Rytr’s product was designed to produce reviews that looked real but weren’t; the FTC treated that as unfair and deceptive.

Promising outcomes you can’t support. Earnings claims, performance claims, and “replace a professional” claims all require substantiation. The e-commerce coaching cases fit here: AI-powered business opportunity claims without adequate proof.

Ignoring foreseeable harm. The FTC has also signaled that companies can’t hide behind “we didn’t know how the model worked.” You’re expected to understand reasonably foreseeable risks and to have evaluated your system before release. The “black box” excuse doesn’t create a safe harbor.

None of this is AI-specific law. It’s standard deception and unfairness doctrine applied to AI claims and AI-enabled conduct.

What Practitioners Should Do

If you’re in marketing, product, or compliance, the implications are straightforward.

Audit AI-related claims. Every place you say “AI,” “machine learning,” “automated,” or similar—website, ads, sales collateral, press—should be mapped. For each claim, ask: Is it accurate? Can we substantiate it? Would the FTC view it as deceptive or unsubstantiated? The DoNotPay order is a template: don’t claim your product replaces or performs like a licensed professional unless you have the evidence.
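One way to start that mapping is mechanical: sweep your marketing copy for AI-claim language and build a review queue for legal or compliance to substantiate. The sketch below is a hypothetical starting point, not a compliance tool; the keyword list, the `*.md` file pattern, and the `audit_claims` helper are all assumptions you would tune to your own collateral.

```python
import re
from pathlib import Path

# Hypothetical keyword list; tune to your own marketing vocabulary.
AI_CLAIM_TERMS = re.compile(
    r"\b(AI[- ]powered|artificial intelligence|machine learning|"
    r"deep learning|automated|algorithmic|neural)\b",
    re.IGNORECASE,
)

def audit_claims(root: str) -> list[dict]:
    """Walk a directory of marketing copy (assumed to be Markdown files)
    and flag every line containing AI-claim language for review."""
    findings = []
    for path in sorted(Path(root).rglob("*.md")):
        lines = path.read_text(encoding="utf-8").splitlines()
        for lineno, line in enumerate(lines, start=1):
            if AI_CLAIM_TERMS.search(line):
                findings.append({
                    "file": str(path),
                    "line": lineno,
                    "text": line.strip(),
                    # To be answered per claim by legal/compliance review:
                    # is this accurate, and can we substantiate it?
                    "substantiated": None,
                })
    return findings
```

A keyword sweep only surfaces candidates; the substantiation judgment for each flagged claim still has to be made by a human reviewer.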

Treat “AI” as a compliance flag. In many organizations, “AI” still gets a pass—“it’s just marketing.” The FTC has made clear it’s not. Any claim that implies capability, performance, or superiority should go through the same substantiation and review process you’d use for other high-stakes advertising claims.

Don’t assume tools are harmless. If you’re building or distributing tools that could be used to generate fake reviews, fake testimonials, or other inauthentic content, the Rytr case is a warning. The FTC can target the provider, not just the end user. Design and positioning matter.

Watch the horizon. Operation AI Comply continues under the current enforcement posture. New actions will clarify where the line is (e.g., how much substantiation is enough, how the FTC treats “AI-assisted” vs. “AI-powered,” and how it applies Section 5 to algorithmic discrimination or bias). State laws (like Colorado’s) and the EU AI Act add new, affirmative obligations; the FTC’s work is a reminder that the old obligations never went away.

The FTC’s bet is that existing consumer protection law is enough to police a lot of AI misconduct. So far, Operation AI Comply has made that bet visible: real money, real orders, and a clear signal that “we use AI” is not a regulatory free pass.
