Treat the EU AI Act as a Brussels problem and you'll miss the deadline. The Act applies to providers and deployers based on where the AI system is placed on the market or put into service, not where the company sits. A US vendor that sells an AI-powered recruitment tool to one customer in Germany, or a US enterprise that deploys a credit-scoring model for its EU subsidiary, can be in scope. August 2, 2026 is the date when the full set of high-risk AI system requirements becomes enforceable. For US teams, that means understanding what "high-risk" means, who bears which obligations, and what actually has to be in place by then.
When "We're Not in Europe" Isn't Enough
The Act's territorial trigger is placing on the market or putting into service in the Union. "Placing on the market" is the first making available of an AI system on the EU market; "putting into service" is the supply of an AI system for first use in the Union for its intended purpose. If you're a US provider and you sell or license an AI system to an EU customer, or if you're a US deployer and you operate an AI system in the EU (including via a local entity or branch), you're in scope. Outputs used in the EU can also matter: if your AI generates decisions or content that are used in the EU, the applicability analysis gets more fact-specific but doesn't disappear. The practical takeaway: if you have EU customers, EU operations, or EU-facing outputs for systems that could fall under the high-risk categories, assume you need to run the classification and obligations check. Ignoring the Act because your HQ is in California is a compliance risk, not a legal exemption.
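For teams that track systems in an inventory, the territorial trigger reduces to three questions per system. The sketch below (Python, with invented field names) is a hypothetical first-pass screen under that assumption, not a legal analysis:

```python
def eu_scope_screen(placed_on_eu_market: bool,
                    put_into_service_in_eu: bool,
                    output_used_in_eu: bool) -> bool:
    """First-pass territorial screen: any True answer means the system
    needs a proper classification and obligations review, not that the
    Act definitely applies."""
    return placed_on_eu_market or put_into_service_in_eu or output_used_in_eu
```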
What Counts as High-Risk—Annex III Bites Harder Than You Think
High-risk AI under the Act has two main hooks. First, AI systems that are safety components of products already regulated by EU harmonisation law (e.g. machinery, medical devices, lifts, certain automotive and aviation systems) are high-risk when that product is required to undergo third-party conformity assessment under the relevant law. Second, and more relevant to many US software and SaaS businesses, Annex III lists specific use-case domains. These include administration of justice and democracy; migration, asylum and border control; law enforcement; access to and enjoyment of essential private and public services (e.g. credit, insurance, benefits); employment, worker management and self-employment; education and vocational training; critical infrastructure; and biometric identification, categorisation and emotion recognition where permitted by law.
That list pulls in a lot of everyday enterprise AI. Recruitment and selection (CV screening, candidate ranking, interview tools), credit scoring and creditworthiness, life and health insurance risk and pricing, eligibility for benefits or essential services, performance evaluation and task allocation, student assessment and admission, and biometric systems used for identification or categorisation—all can be high-risk depending on the exact use. The line that teams often miss: it's not "we use AI somewhere." It's whether the AI system is used in one of those domains in a way that could significantly affect people's health, safety or fundamental rights. If your US company sells HR tech, fintech, insurtech, or edtech into the EU, or runs those use cases there, you need to map your systems to Annex III and the two-part high-risk definition. A surprisingly large number of B2B SaaS products land in at least one Annex III bucket when they're used in the EU.
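To make the two-limb test concrete, here is a minimal sketch of how a compliance team might encode a first-pass screen in an internal inventory script. The domain list is an illustrative paraphrase of Annex III and every name is hypothetical; the Regulation text, not this code, decides classification:

```python
# Illustrative paraphrase of the Annex III domains; hypothetical labels.
ANNEX_III_DOMAINS = {
    "biometrics", "critical_infrastructure", "education", "employment",
    "essential_services",  # credit, insurance, benefits
    "law_enforcement", "migration_border", "justice_democracy",
}

def high_risk_screen(safety_component_of_regulated_product: bool,
                     third_party_assessment_required: bool,
                     annex_iii_domain: str | None) -> bool:
    """First-pass high-risk screen; every hit still goes to legal review."""
    # Limb 1: safety component of an Annex I product that itself requires
    # third-party conformity assessment (Article 6(1)).
    if safety_component_of_regulated_product and third_party_assessment_required:
        return True
    # Limb 2: use in an Annex III domain (Article 6(2)), subject to the
    # narrow Article 6(3) derogation, which needs human judgement.
    return annex_iii_domain in ANNEX_III_DOMAINS

# e.g. a CV-screening SaaS sold to EU customers:
# high_risk_screen(False, False, "employment") -> True
```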
Provider vs Deployer: Two Roles, Two Sets of Obligations
The Act splits duties between providers (who develop an AI system, or have one developed, and place it on the market or put it into service under their own name or trademark) and deployers (who use an AI system under their authority in the course of business). You can be both, for example if you build and operate your own high-risk AI in the EU.
Providers of high-risk AI systems must meet design- and lifecycle obligations: risk management (Article 9), data and data governance (Article 10), technical documentation (Article 11), record-keeping and transparency (Articles 12–13), human oversight (Article 14), accuracy, robustness and cybersecurity (Article 15), and a quality management system (Article 17). They must also comply with conformity assessment and, where applicable, affix CE marking and draw up an EU declaration of conformity before placing the system on the market or putting it into service. For most Annex III systems the route is the internal control procedure (Annex VI); for certain biometric systems, and for products assessed under Annex I sectoral law, a notified body may be required. Either way, Annex III systems must be registered in the EU database. If you're a US provider shipping a high-risk AI system into the EU, you're on the hook for the full technical and process stack, not just a policy.
Deployers have their own list: use the system in line with the provider's instructions, ensure human oversight, monitor operation and report serious incidents, keep logs where required, and, for certain deployers (public bodies, private entities providing public services, and credit and insurance use cases), conduct a fundamental rights impact assessment (Article 27). They don't do conformity assessment or CE marking; that's on the provider. But they must not put a high-risk system into service unless the provider has complied, and they remain responsible for their own use and oversight. A US company with an EU subsidiary that deploys a high-risk AI system (e.g. for recruitment or credit scoring) must check that the provider has complied and must fulfil its own deployer duties for that use.
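One way to keep the split straight internally is a simple role-to-duties map that both product and legal teams can read. The sketch below paraphrases the obligations above in a hypothetical Python checklist; the article numbers track the Act, but the wording is shorthand, not the legal text:

```python
# Hypothetical internal checklist; entries are shorthand for the duties
# described above, not the Regulation's wording.
OBLIGATIONS = {
    "provider": [
        "risk management system (Art. 9)",
        "data and data governance (Art. 10)",
        "technical documentation (Art. 11)",
        "record-keeping (Art. 12) and transparency (Art. 13)",
        "human oversight design (Art. 14)",
        "accuracy, robustness, cybersecurity (Art. 15)",
        "quality management system (Art. 17)",
        "conformity assessment, EU declaration of conformity, CE marking",
        "EU database registration",
    ],
    "deployer": [
        "use per provider's instructions",
        "assign competent human oversight",
        "monitor operation and report serious incidents",
        "keep logs where required",
        "fundamental rights impact assessment where required (Art. 27)",
    ],
}

def duties_for(roles: set[str]) -> list[str]:
    """A company that both builds and operates a system owes both lists."""
    return [duty for role in sorted(roles) for duty in OBLIGATIONS[role]]
```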
For US teams the implication is clear: if you're a provider, August 2026 is the deadline to have your high-risk systems through conformity assessment and compliant before they go on the market or into service. If you're a deployer, you need to know that your providers are compliant and that your own processes (oversight, logging, incident reporting, impact assessment where required) are in place.
The August 2026 Deadline and What "Enforceable" Means
The Act entered into force in August 2024. Prohibited practices and some other rules are already applicable. August 2, 2026 is when the core high-risk obligations—including the full provider and deployer requirements for high-risk AI systems—apply. From that date, placing a high-risk AI system on the EU market or putting it into service without meeting the requirements (including conformity assessment and, where relevant, CE marking) is non-compliant. Penalties for high-risk violations go up to €15 million or 3% of global annual turnover, whichever is higher. The deadline is real, and the cost of missing it isn't just reputational.
There are transitional rules for high-risk systems already placed on the market or put into service before August 2, 2026. Under Article 111, such a system falls within the Act's scope only if its design is significantly changed after that date, and systems intended for use by public authorities must be brought into compliance by August 2, 2030 in any case. Those are narrow carve-outs that turn on precise timing and use. The default assumption for any new or materially changed high-risk system should be: it must comply by August 2, 2026 if it's placed on the market or put into service from then on.
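The penalty ceiling and the transitional logic are both mechanical enough to sanity-check in a few lines. The following is a simplified sketch of that arithmetic under the reading above; it compresses Articles 99 and 111 and is not legal advice:

```python
from datetime import date

HIGH_RISK_APPLY = date(2026, 8, 2)            # high-risk obligations apply
PUBLIC_AUTHORITY_DEADLINE = date(2030, 8, 2)  # legacy systems used by public authorities

def max_fine_eur(global_annual_turnover_eur: float) -> float:
    """Article 99(4) ceiling: EUR 15M or 3% of worldwide annual turnover,
    whichever is higher."""
    return max(15_000_000.0, 0.03 * global_annual_turnover_eur)

def must_comply_by(placed_on_market: date,
                   significantly_changed: bool,
                   used_by_public_authority: bool) -> date | None:
    """Simplified reading of the Article 111 transitional rules."""
    if placed_on_market >= HIGH_RISK_APPLY or significantly_changed:
        return HIGH_RISK_APPLY
    if used_by_public_authority:
        return PUBLIC_AUTHORITY_DEADLINE
    return None  # legacy system: outside scope unless its design changes

# e.g. max_fine_eur(2_000_000_000) -> 60,000,000.0: 3% beats the 15M floor
```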
What US Companies Should Do Now
First, map EU touchpoints. Identify any AI systems you place on the EU market or put into service in the EU (including via subsidiaries, partners or customers). That includes SaaS sold to EU customers and internal systems used by EU operations.
Second, classify. For each such system, determine whether it falls under the product-safety limb (safety component of a regulated product) or Annex III (use in the listed domains). If it does, treat it as high-risk unless a narrow exemption clearly applies. Document the classification and the reasoning.
Third, assign provider vs deployer. If you develop and place on the market or put into service, you're a provider and you need the full compliance stack (risk management, data governance, technical documentation, QMS, conformity assessment, registration). If you only deploy a third party's system, you're a deployer: ensure the provider is compliant and meet deployer obligations (instructions, oversight, monitoring, incidents, logs, impact assessment where required).
Fourth, plan to the date. Conformity assessment, technical documentation, QMS updates and (where needed) notified body involvement take time. Fundamental rights impact assessments and deployer-side processes take time too. Treat the run-up to August 2026 as implementation time, not discovery time. Many US companies are already late to the mapping and classification step; the ones that treat 2026 as "still far away" will be the ones scrambling as the deadline closes in.
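Taken together, the four steps reduce to one record per AI system in a compliance inventory. A minimal sketch, assuming a Python-based tracker with invented field names:

```python
from dataclasses import dataclass, field

@dataclass
class ComplianceRecord:
    """One row per AI system, mirroring the four steps above:
    map, classify, assign roles, plan to the date."""
    system: str
    eu_touchpoints: list[str]          # step 1: customers, subsidiaries, outputs
    high_risk: bool                    # step 2: Annex I limb or Annex III domain
    classification_rationale: str      # step 2: documented reasoning
    roles: set[str]                    # step 3: {"provider"}, {"deployer"}, or both
    open_actions: list[str] = field(default_factory=list)  # step 4: gap list
    target_date: str = "2026-08-02"    # step 4: plan to the application date

record = ComplianceRecord(
    system="cv-screening-tool",
    eu_touchpoints=["SaaS customers in DE and FR"],
    high_risk=True,
    classification_rationale="Annex III employment: candidate ranking",
    roles={"provider"},
    open_actions=["technical documentation", "QMS update", "EU database registration"],
)
```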
The EU AI Act August 2026 deadline isn't someone else's problem. For US providers and deployers with EU market or in-EU use, high-risk AI system requirements are mandatory. Knowing whether you're in scope, which systems are high-risk, and whether you're provider or deployer is the only way to avoid a nasty surprise when the deadline hits.