The Department of Justice’s AI Litigation Task Force, announced in January 2026, doesn’t repeal a single state statute. It doesn’t have to. Its job is to identify state AI laws that conflict with the administration’s “minimally burdensome national policy framework” and to challenge them in court (preemption, Dormant Commerce Clause, or whatever theory the Attorney General thinks fits). For anyone building or deploying AI across state lines, the practical question is: which laws are most likely to end up in the Task Force’s crosshairs, and why?
The answer turns on three things: what the December 2025 executive order actually calls out, which state regimes impose the heaviest and most distinctive obligations, and how the Commerce Department’s referral pipeline is set up to feed the DOJ.
The Pipeline: Commerce Evaluates, DOJ Litigates
The Task Force didn’t emerge in a vacuum. It was created under Executive Order 14,365, “Ensuring a National Policy Framework for Artificial Intelligence,” signed December 11, 2025. The order does two things that matter for state law. First, it instructs the Attorney General to stand up a task force to challenge state AI laws that are “inconsistent” with federal policy—including claims that they unconstitutionally regulate interstate commerce, are preempted by existing federal law, or are “otherwise unlawful” in the AG’s view. Second, it tasks the Secretary of Commerce with evaluating state AI laws and identifying those that are “onerous” or conflict with federal policy. That evaluation is due within 90 days of the order, which puts it at roughly mid-March 2026. The Commerce assessment is explicitly framed as input for the Task Force: state laws that Commerce flags are the ones most likely to become litigation targets.
The order also names what the administration considers the core problem. State laws that require AI systems to “alter truthful outputs” or to “embed ideological bias” in the name of non-discrimination are called out as conflicting with a federal framework that “prioritizes truth.” That language is a direct shot at anti-discrimination and fairness mandates that could be characterized as compelling certain outputs or disclosures. It doesn’t take much to see how a state law that mandates impact assessments, bias testing, or “reasonable care” to avoid algorithmic discrimination could be reframed in litigation as forcing developers to change model behavior in ways the federal government disfavors. Whether that argument will win in court is another matter. But it tells you which kind of state law the administration is most interested in challenging.
Colorado: First in Line
Colorado’s Consumer Protections for Artificial Intelligence Act (SB 24-205) is the obvious first target. It’s the first comprehensive state AI law in the U.S., it’s already been singled out in public discussion of the order, and it fits the “alter truthful outputs” framing more neatly than any other state regime. The law imposes developer and deployer duties for “high-risk” AI systems: risk disclosure to the Attorney General within 90 days of discovering or receiving a credible report of algorithmic discrimination, public statements on how discrimination risks are managed, impact assessments, consumer notice when a high-risk system is used in a consequential decision, and appeal rights with human review where feasible. Penalties run up to $20,000 per violation, with enforcement vested solely in the Colorado AG.
From the Task Force’s perspective, the law does several things that invite challenge. It creates a detailed, state-specific regime for “algorithmic discrimination” that goes beyond federal anti-discrimination law and ties compliance to state-defined notions of reasonable care and risk. It requires disclosure to a state AG and public statements that could be characterized as compelled speech. And because the law applies to systems that make or substantially influence “consequential” decisions affecting Colorado residents—regardless of where the developer or deployer is located—it has clear extraterritorial reach. That’s the classic setup for a Dormant Commerce Clause claim: one state’s rules effectively regulating conduct and product design nationwide. Colorado has already delayed the law’s effective date to June 2026 and added a cure period through June 2027, partly in response to industry pushback and partly, one assumes, in the shadow of federal action. None of that insulates the law from a preemption or constitutional challenge. If Commerce’s 90-day evaluation lists Colorado, expect the Task Force to treat it as a priority.
California: Employment and the Expansion of Existing Law
California didn’t pass a standalone “AI Act” like Colorado. Instead, the Civil Rights Council adopted regulations that clarify how the state’s existing employment anti-discrimination laws apply to AI and automated decision systems. Those regulations took effect in October 2025. They expand who counts as an “agent” (so that vendors performing recruitment or screening can be liable), define “automated decision system” for employment contexts, impose record-retention requirements, and create incentives and risks around anti-bias testing. The Council’s position is that this is interpretation of current law, not new prohibition. But from a preemption or commerce perspective, the effect is the same: employers and vendors operating across the country must comply with California’s view of how AI may and may not be used in hiring, promotion, and related decisions.
That makes California’s employment AI rules a plausible second wave for the Task Force. The argument would be that the state is imposing a distinct, burdensome layer of AI-specific obligations that affect the design and deployment of systems used nationwide, again triggering Dormant Commerce Clause or conflict-preemption concerns. The “alter truthful outputs” angle is weaker here than in Colorado—the regulations are about discrimination and process, not about forcing models to say or not say something—but the patchwork argument is strong. The political salience is high too: California is the largest state and its rules influence vendor behavior everywhere. If the administration wants to signal that state-by-state AI rules are unacceptable, California is a high-impact target.
Texas: New but Not Off the Radar
Texas’s Responsible Artificial Intelligence Governance Act (TRAIGA), signed in June 2025 and effective January 2026, is the third comprehensive state AI law. It prohibits AI systems designed to discriminate unlawfully, to create or facilitate child sexual abuse material (CSAM) or deepfakes of minors, to infringe constitutional rights, or to manipulate behavior toward self-harm or crime. It requires clear disclosures before or during consumer-facing AI interactions, restricts biometric capture and storage without informed consent (with carve-outs), and creates an AI advisory council that is explicitly barred from issuing binding regulations. It also includes a 36-month regulatory sandbox and safe harbors for entities following recognized risk-management frameworks.
TRAIGA is less prescriptive than Colorado’s law in some ways—no mandatory 90-day AG disclosure for algorithmic discrimination, no impact-assessment regime—but it still creates state-specific definitions, prohibitions, and disclosure rules that apply to systems used in or affecting Texas. For a company operating in all fifty states, that’s another layer. The Task Force could train its sights on Texas on the same theory: state law that effectively governs the design and deployment of AI systems in interstate commerce. Texas may be a less attractive first target than Colorado simply because the administration may not want to pick a fight with a Republican-led state out of the gate. But if the goal is to establish that any state AI regime that goes beyond a minimal federal floor is vulnerable, Texas’s law is in the same conceptual bucket.
What Has to Happen Before a Law Falls
The executive order does not preempt anything by itself. It directs the Attorney General to create the Task Force and the Commerce Secretary to evaluate state laws. It does not suspend or invalidate state statutes. No state law disappears until a court says so, either in a suit brought by the United States (or another plaintiff) or in a defensive challenge when a state tries to enforce its law. The Task Force will have to pick targets, file (or support) litigation, and win on preemption or constitutional grounds. Dormant Commerce Clause cases are fact-intensive; preemption depends on whether Congress or federal agencies have occupied the field or whether state law conflicts with federal objectives. Neither is a slam dunk. Some commentators have already suggested that the order’s own use of federal spending conditions (e.g., tying BEAD broadband funds or other grants to state compliance with federal AI policy) may run into Spending Clause or Tenth Amendment limits. The same courts that might trim state power might also trim executive overreach.
The realistic takeaway is uncertainty, not doom. Colorado is the most likely first target because of its prominence, its explicit algorithmic-discrimination and disclosure regime, and the way the order’s “truthful outputs” language maps onto it. California’s employment rules are a strong candidate for the next wave. Texas fits the same legal theory but may be deprioritized for political or strategic reasons. Other states with narrower or later-adopted AI rules will be evaluated by Commerce and could be added to the list.
For compliance and risk teams, the move is to stay current. Watch for Commerce’s 90-day evaluation: that document will name names. Watch for the first Task Force complaints or amicus positions: they’ll reveal which legal theories the DOJ is willing to push and how aggressively. And keep building for the state regimes that are already in effect or about to be. Until a court enjoins or invalidates a state law, it remains enforceable. The Task Force changes the odds that some of these laws will be challenged; it doesn’t yet change what the law is.