International Compliance

US Companies and the EU AI Act

Extraterritorial reach means your American startup is not exempt.

RegulaAI Team
2025-12-23
8 min read

The EU AI Act applies to ANY company selling AI products or services to EU customers. Learn how extraterritorial reach affects US businesses and what you need to do before August 2026.

If your AI product has even a single customer in the EU, you are subject to the EU AI Act. No exceptions.

This is not a hypothetical scenario. By August 2, 2026, every AI system classified as "high-risk" operating in the European Union must comply with the EU AI Act—regardless of where the company is headquartered.

For American startups and tech companies, this represents a fundamental shift in how you must approach AI development, deployment, and governance.

# The GDPR Playbook Repeats

Remember when GDPR went into effect in 2018? US companies scrambled to update privacy policies, implement cookie consent banners, and overhaul data handling practices. Many thought they could ignore it. They were wrong.

The EU AI Act follows the same extraterritorial logic as GDPR: If you serve EU customers, EU law applies to you.

Article 2 of the EU AI Act is explicit. The regulation applies to:

1. Providers placing AI systems on the EU market (even if based in the US)
2. Deployers of AI systems located in the EU
3. Providers and deployers of AI systems located outside the EU, where the output is used within the EU

Translation: If your SaaS chatbot is used by a company in Berlin, your AI-powered recruitment tool screens candidates in Paris, or your credit scoring algorithm serves customers in Madrid, you are in scope.

#What "High-Risk" Really Means

Not all AI systems face the same level of scrutiny. The EU AI Act establishes a risk-based framework with four categories:

Prohibited Systems (Banned entirely):

- Social scoring by governments
- Subliminal manipulation
- Biometric categorization for sensitive characteristics
- Real-time remote biometric identification in public spaces (with narrow exceptions)

High-Risk Systems (Strict requirements):

- Employment and worker management (CV screening, promotion decisions)
- Access to education and vocational training
- Credit scoring and creditworthiness assessment
- Law enforcement
- Border control and migration
- Administration of justice and democratic processes

Limited-Risk Systems (Transparency obligations):

- Chatbots (must disclose they are AI)
- Emotion recognition systems
- Biometric categorization systems

Minimal-Risk Systems (No specific obligations):

- AI-powered spam filters
- AI-based video games
- Inventory management systems

If your AI system falls into the "High-Risk" category, you face significant compliance obligations—and penalties for non-compliance.
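To make the triage concrete, here is a minimal first-pass classifier sketch that maps a use case to one of the four tiers. The keyword buckets and the `classify_risk` helper are our own illustrative assumptions, not the Act's Annex III taxonomy; treat the output as a starting point for legal review, not a determination.

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Illustrative buckets drawn from the categories above,
# NOT the official Annex III taxonomy.
PROHIBITED_USES = {"social_scoring", "subliminal_manipulation"}
HIGH_RISK_USES = {"cv_screening", "credit_scoring", "border_control"}
LIMITED_RISK_USES = {"chatbot", "emotion_recognition"}

def classify_risk(use_case: str) -> RiskTier:
    """First-pass triage only; always confirm with legal counsel."""
    if use_case in PROHIBITED_USES:
        return RiskTier.PROHIBITED
    if use_case in HIGH_RISK_USES:
        return RiskTier.HIGH
    if use_case in LIMITED_RISK_USES:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

print(classify_risk("cv_screening"))  # RiskTier.HIGH
```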

# The August 2, 2026 Deadline

This is not a soft deadline. By August 2, 2026, all high-risk AI systems must be fully compliant. For US companies still in the "we'll deal with it later" mindset, time is running out.

The implementation timeline is staggered:

- February 2, 2025: Prohibition of banned AI practices (already in effect)
- August 2, 2025: Obligations for general-purpose AI models
- August 2, 2026: Full obligations for high-risk AI systems
- August 2, 2027: Obligations for certain high-risk AI systems already on the market

If you are building or deploying high-risk AI today, you have just over seven months to achieve full compliance.

# The Penalty Structure: Why This Matters

The EU does not issue warnings. They issue fines.

The penalty structure mirrors GDPR's severity:

- Up to €35 million or 7% of global annual turnover (whichever is higher) for violations of prohibited AI practices
- Up to €15 million or 3% of global annual turnover for violations of other obligations
- Up to €7.5 million or 1% of global annual turnover for supplying incorrect information

For a US startup with $50 million in annual revenue, a single violation could result in a $3.5 million fine. For a company like OpenAI or Anthropic, the exposure is measured in hundreds of millions of dollars.
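The arithmetic is easy to encode. Below is a sketch of the exposure calculation, assuming our reading of Article 99: large companies face the higher of the fixed cap and the turnover percentage, while SMEs and start-ups face the lower. The tier names and the `max_fine` helper are illustrative, not official terminology.

```python
# Penalty tiers from Article 99: (EUR cap, share of global annual turnover).
PENALTY_TIERS = {
    "prohibited_practice": (35_000_000, 0.07),
    "other_obligation": (15_000_000, 0.03),
    "incorrect_information": (7_500_000, 0.01),
}

def max_fine(turnover_eur: float, tier: str, sme: bool = False) -> float:
    """Upper bound on a fine: the higher of the two caps for large firms,
    the lower of the two for SMEs and start-ups, as we read Article 99(6)."""
    cap_eur, cap_pct = PENALTY_TIERS[tier]
    pct_fine = turnover_eur * cap_pct
    return min(cap_eur, pct_fine) if sme else max(cap_eur, pct_fine)

# A startup with 50M in annual turnover, prohibited-practice violation:
print(f"{max_fine(50_000_000, 'prohibited_practice', sme=True):,.0f}")
# -> 3,500,000  (the $3.5M figure from the example above)
```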

# How US State Laws Compare

The United States is taking a patchwork approach to AI regulation, with individual states leading the charge.

Colorado AI Act (2024):

- Requires developers and deployers to use "reasonable care" to avoid algorithmic discrimination
- Applies to high-risk AI systems used in consequential decisions
- Focuses on employment, education, financial services, housing, insurance, and legal services
- Enforcement begins June 30, 2026 (delayed from the original February 1, 2026 date)

California Bills:

- AB 2930 (proposed): would require AI impact assessments for automated decision-making tools
- SB 1047 (vetoed in September 2024): would have mandated safety testing for large-scale AI models
- Multiple bills targeting deepfakes, biometric data, and AI in employment

The key difference: US state laws are less comprehensive and less coordinated than the EU AI Act. There is no federal AI regulation in the United States. Companies face a fragmented regulatory landscape with varying requirements across states.

The EU AI Act, by contrast, is a unified framework across all 27 member states. One set of rules. One compliance standard.

For US companies serving both domestic and EU markets, the strategic choice is clear: Build to the EU standard. It is the highest bar, and achieving EU compliance will satisfy most US state-level requirements.

# What US Companies Must Do Now

If you are a US-based company with EU customers (or plans to enter the EU market), here is your action plan:

### 1. Classify Your AI Systems

Determine whether your AI falls into the prohibited, high-risk, limited-risk, or minimal-risk category. This is not optional. Misclassifying your system can result in enforcement action.

### 2. Conduct a Risk Assessment

For high-risk systems, you must document:

- The intended purpose and context of use
- Data sources and training methodologies
- Known limitations and risks
- Measures to mitigate bias and ensure accuracy
- Human oversight mechanisms
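One practical way to keep this assessment auditable is to capture it as structured data from day one. A minimal sketch follows; the field names are our own and are not prescribed by the Act.

```python
from dataclasses import dataclass

@dataclass
class RiskAssessment:
    """Structured record covering the documentation points above.
    Field names are illustrative, not prescribed by the EU AI Act."""
    system_name: str
    intended_purpose: str
    context_of_use: str
    data_sources: list[str]
    training_methodology: str
    known_limitations: list[str]
    bias_mitigations: list[str]
    accuracy_measures: list[str]
    human_oversight: str

assessment = RiskAssessment(
    system_name="cv-screener-v2",
    intended_purpose="Rank job applicants for recruiter review",
    context_of_use="EU-based hiring workflows",
    data_sources=["historical_applications.parquet"],
    training_methodology="Gradient-boosted trees on anonymized features",
    known_limitations=["Sparse data for senior roles"],
    bias_mitigations=["Demographic parity monitoring per release"],
    accuracy_measures=["AUC on a held-out 2024 cohort"],
    human_oversight="Recruiter reviews every rejection before it is sent",
)
```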

### 3. Implement Technical Documentation

The EU AI Act requires extensive documentation, including:

- A detailed description of the AI system and its components
- Instructions for use
- Risk management processes
- Data governance and management practices
- Metrics used to measure accuracy, robustness, and cybersecurity

This is not a one-page PDF. Expect 20-50 pages of technical documentation per AI system.

### 4. Establish Human Oversight

High-risk AI systems must enable effective human oversight. This means:

- Humans can understand the AI's outputs
- Humans can override AI decisions
- Humans can intervene to stop the AI system

A "human-in-the-loop" that rubber-stamps AI decisions in 1 second does not satisfy this requirement.

### 5. Ensure Transparency and Explainability

Users must be informed when they are interacting with an AI system. For high-risk systems, you must be able to explain how the AI reached its decision.

"Black box" models are a compliance risk. If you cannot explain why your AI rejected a job applicant or denied a loan, you are liable.

### 6. Monitor for Bias and Discrimination

You must conduct ongoing testing to ensure your AI system does not produce discriminatory outcomes based on race, gender, religion, disability, or other protected characteristics.

This requires:

- Diverse and representative training data
- Bias detection tools (e.g., fairness metrics, SHAP/LIME)
- Regular audits and retraining
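A concrete starting point is a scheduled fairness check in your test suite. The sketch below computes the demographic parity gap, one common metric, across groups; the 0.10 threshold is a widely used rule of thumb, not a figure from the Act.

```python
from collections import defaultdict

THRESHOLD = 0.10  # common rule of thumb, not a figure from the Act

def demographic_parity_gap(outcomes: list[tuple[str, int]]) -> float:
    """Outcomes are (group, decision) pairs, decision 1 = favorable.
    Returns the gap between the highest and lowest approval rates."""
    totals, favorable = defaultdict(int), defaultdict(int)
    for group, decision in outcomes:
        totals[group] += 1
        favorable[group] += decision
    rates = [favorable[g] / totals[g] for g in totals]
    return max(rates) - min(rates)

results = [("group_a", 1), ("group_a", 1), ("group_a", 0),
           ("group_b", 1), ("group_b", 0), ("group_b", 0)]
gap = demographic_parity_gap(results)
print(f"gap={gap:.2f}, within threshold={gap <= THRESHOLD}")
# gap=0.33, within threshold=False  -> flag for audit and retraining
```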

### 7. Prepare for Incident Reporting

If your AI system causes a serious incident, you must report it to the relevant market surveillance authority. Under Article 73, the window is 15 days from becoming aware of the incident in most cases, 10 days where a death is involved, and as little as two days for a widespread infringement. Draft your incident response plan now, not after an incident occurs.
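Because the reporting clock varies by incident type, it is worth encoding the deadlines rather than recalling them under pressure. A sketch, based on our reading of Article 73; confirm the windows against the final text for your case.

```python
from datetime import datetime, timedelta

# Reporting windows under Article 73 (days after becoming aware),
# as we read them; confirm against the final text for your case.
REPORTING_WINDOWS = {
    "serious_incident": 15,
    "death": 10,
    "widespread_infringement": 2,
}

def report_deadline(aware_at: datetime, incident_type: str) -> datetime:
    """When the report to the market surveillance authority is due."""
    return aware_at + timedelta(days=REPORTING_WINDOWS[incident_type])

print(report_deadline(datetime(2026, 9, 1), "widespread_infringement"))
# 2026-09-03 00:00:00
```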

# The Strategic Opportunity

Compliance is not just a cost. It is a competitive advantage.

EU enterprises will not buy AI products from non-compliant vendors. If your competitors achieve EU AI Act compliance and you do not, you lose access to the world's second-largest economy.

By contrast, US companies that proactively achieve compliance can:

- Market themselves as EU AI Act compliant
- Win enterprise contracts in Europe
- Differentiate from non-compliant competitors
- Build trust with privacy-conscious customers globally

# How RegulaAI Helps US Companies

At RegulaAI, we help US companies navigate EU AI Act compliance without hiring expensive law firms or consultants.

Our platform provides:

- Automated risk classification: Determine if your AI system is high-risk in minutes
- Compliance checklists: Step-by-step guidance based on official EU guidelines
- Professional audit reports: Generate documentation ready for regulatory review
- Ongoing monitoring: Stay updated as regulations evolve

Whether you are a startup in San Francisco or an enterprise in New York, if you serve EU customers, we can help you achieve compliance before the August 2026 deadline.

# Conclusion: Act Now or Pay Later

The EU AI Act is not going away. It is not a suggestion. It is the law.

US companies that ignore it will face the same fate as those who ignored GDPR: emergency compliance sprints, expensive consultants, and in some cases, multi-million-dollar fines.

The smart move is to start now. Classify your AI systems. Conduct a risk assessment. Build compliance into your development process.

The August 2, 2026 deadline is closer than you think.

Ready to start your compliance journey? [Begin your free AI risk assessment](#) and find out if your system is high-risk in under 10 minutes.

