If you're an American SaaS company with a few EU customers, it's tempting to assume the EU AI Act is someone else's problem. The regulation is European; your HQ is in Austin or San Francisco. Your AI runs in US or generic cloud regions. Surely that puts you outside the fence.
Wrong. The Act is deliberately built to reach beyond EU borders. Two concepts do the work: where you "place on the market" or "put into service" AI systems, and where the output of those systems is used. For most US SaaS shops, the second hook is the one that bites.
The Two Hooks That Pull You In
Article 2 of the EU AI Act spells out scope in plain terms. Two limbs matter for a provider with no establishment in the Union.
First: providers that place on the market or put into service AI systems (or general-purpose AI models) in the Union are in scope, "irrespective of whether those providers are established or located within the Union or in a third country." If you're selling or deploying your AI product into the EU market—customers in the EU, contracts with EU entities, or services directed at EU users—you're placing on the market in the Union. Your company's location is irrelevant.
Second: providers and deployers "that have their place of establishment or are located in a third country" are in scope "where the output produced by the AI system is used in the Union." No sale in the EU is required. No EU entity need be the direct customer. If the output of your AI is used in the EU, the Act applies to you. That’s the clause that catches a lot of US teams off guard.
"Output" here means the result of the AI system’s processing—a recommendation, a score, a decision, a classification, generated text, whatever the system produces. "Used in the Union" means that output is consumed or applied inside the EU: by an EU-based company, by an EU user, or in a process that affects people in the EU. A US vendor whose platform is used by an EU company to screen resumes, set prices, or personalize content is producing output that is used in the Union. The vendor is in scope as a provider (and the EU company as deployer). Geography of servers or corporate HQ doesn’t change that.
Why "We Only Have a Few EU Customers" Doesn’t Get You Off the Hook
A common line is: "We’re US-based; we have some EU users but we’re not really in Europe." The Act doesn’t care about headcount or revenue share. It cares whether you place AI on the EU market or whether your AI’s output is used in the EU. If an EU business uses your AI-powered HR screening tool, your AI’s output is used in the Union. If an EU consumer gets recommendations from your app, same. If you’re a US company and an EU deployer embeds your model or API into their product and that product is used in the EU, your output is used in the Union. You don’t need an EU subsidiary, an EU data center, or even an EU contract in your name. The flow of output is what matters.
One nuance: the text says output "used in the Union," not "could be used" or "might eventually be used." The focus is on actual use. If no one in the EU is actually using your AI’s output, you’re not caught by this limb. But the moment an EU customer, partner, or end-user uses that output—for decisions, for content, for recommendations, for anything the Act regulates—you’re in scope. For typical B2B SaaS selling into the EU, that moment is usually already in the past.
The Outsourcing Trap (Recital 22)
Recital 22 closes a different loophole. Suppose an EU company contracts with a US (or other third-country) vendor to run a high-risk AI task. The AI runs abroad; data is sent out, processed, and the result is sent back. The AI system is never "placed on the market" or "put into service" inside the EU in the traditional sense. The EU company is just buying a service.
The recital says the Regulation still applies to that third-country provider, "to the extent the output produced by those systems is intended to be used in the Union." So if you’re a US SaaS or AI vendor and an EU entity is your customer—using your output for something that happens in the EU—you’re in scope. You can’t avoid the Act by hosting everything in the US and only shipping results back. The EU is explicitly trying to prevent circumvention by outsourcing. If the output is used in the Union, the provider is subject to the Act.
That’s especially relevant for American AI infrastructure providers, model APIs, and vertical SaaS that sell to EU enterprises. You’re not "just" a subcontractor. You’re a provider whose output is used in the Union, with the obligations that follow.
Who’s In, Who’s Out, and the Gray in Between
Clearly in: You sell an AI product (or API) to EU customers, or your product is used by EU businesses or consumers and the AI’s output is used in the EU. You’re a provider (and possibly deployer) in scope.
Clearly out: Your AI is used only outside the EU and its output is not used in the Union. No EU customers, no EU users, no flow of output into the EU. The Act doesn’t apply to you on territorial grounds.
Gray: You have EU users but the AI is used for minimal-risk applications (e.g. spam filtering, basic recommendations with no high-risk use). You’re still in scope as a provider placing on the market or whose output is used in the Union; the difference is that your obligations are lighter (e.g. transparency for limited-risk, or none for minimal risk). So "we’re only doing low-risk stuff in the EU" doesn’t mean "the Act doesn’t apply." It means "the Act applies, but with a smaller set of duties."
Another gray area: B2B2C. Your direct customer is a US company, but that company’s product is used by EU end-users and your AI’s output feeds that product. If the output is used in the EU (e.g. the US company’s EU users see recommendations or decisions based on your AI), you’re still producing output used in the Union. The chain doesn’t break because there’s an intermediary.
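Plugging the gray cases into the earlier sketch makes the pattern explicit: risk level and intermediaries change your duties, not your scope. This reuses the definitions from the sketch above, and the field names remain illustrative.

```python
# Gray case 1: EU users, but only minimal-risk features (e.g. spam filtering).
low_risk_eu = AISystemFacts(places_on_eu_market=True, output_used_in_eu=True,
                            provider_location="US")
print(in_territorial_scope(low_risk_eu))  # True -- in scope; the duties are just lighter

# Gray case 2 (B2B2C): the direct customer is a US company, but its EU end-users
# consume the AI's output. The intermediary does not break the chain.
b2b2c = AISystemFacts(places_on_eu_market=False, output_used_in_eu=True,
                      provider_location="US")
print(in_territorial_scope(b2b2c))  # True -- the second hook again
```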
What Actually Happens When You’re In Scope
Being in scope doesn't automatically mean "high-risk" obligations. The Act is risk-tiered. Unacceptable-risk AI is prohibited. High-risk AI (e.g. in employment, credit, essential services, certain safety components) gets the full weight: risk management, data governance, technical documentation, human oversight, conformity assessment, and so on. Limited-risk systems face mainly transparency duties. Minimal-risk systems face no specific obligations. The next step is to classify your systems and then apply the rules that attach to that tier. For many American SaaS companies, the practical move is: (1) confirm you're in scope under Article 2, (2) map your use cases to the Act's risk categories, and (3) build the compliance that matches—starting with high-risk if you have it.
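As a rough illustration of how tiering maps to duties (the tier assignments below are simplified examples, not a complete reading of the Act's prohibited-practices list or Annex III):

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "risk mgmt, data governance, technical docs, human oversight, conformity assessment"
    LIMITED = "transparency duties"
    MINIMAL = "no specific obligations"

# Simplified, illustrative mapping of example use cases to tiers.
# Real classification means reading the prohibited-practices list and Annex III.
EXAMPLE_TIERS = {
    "social scoring of individuals": RiskTier.UNACCEPTABLE,
    "resume screening for hiring": RiskTier.HIGH,
    "credit scoring": RiskTier.HIGH,
    "customer-facing chatbot": RiskTier.LIMITED,
    "spam filtering": RiskTier.MINIMAL,
}

for use_case, tier in EXAMPLE_TIERS.items():
    print(f"{use_case}: {tier.name} -> {tier.value}")
```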
Penalties for serious non-compliance go up to €35 million or 7% of global annual turnover, whichever is higher. Enforcement is by EU member-state authorities, with coordination at Union level. "We’re not in Europe" is not a defense when your output is used there.
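The "whichever is higher" rule means the ceiling scales with revenue once you pass a crossover point. A quick back-of-envelope, with hypothetical turnover figures:

```python
def max_fine_eur(global_annual_turnover_eur: float) -> float:
    """Ceiling for the most serious violations: the higher of
    EUR 35 million or 7% of global annual turnover."""
    return max(35_000_000, 0.07 * global_annual_turnover_eur)

# Below EUR 500M in turnover the flat EUR 35M dominates; above it, 7% takes over.
for turnover in (100e6, 500e6, 2e9):
    print(f"turnover EUR {turnover:,.0f} -> ceiling EUR {max_fine_eur(turnover):,.0f}")
```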
The Bottom Line for American SaaS
The EU AI Act’s extraterritorial reach is deliberate. It applies to you if you place AI on the EU market or if the output of your AI is used in the Union. For US-based SaaS, that usually means: if you have EU customers or your product is used in a way that puts your AI’s output in the hands of EU users or processes, you’re in. Get clear on that first. Then classify your systems, and treat compliance as an engineering and governance problem, not a box to tick later.
We help US SaaS companies assess EU AI Act scope, classify risk, and build compliance. Contact us for a scope assessment and readiness review for the EU AI Act and other regimes.