Everything you need to know about the EU AI Act: from high-risk AI systems to penalties, deadlines, and how to comply. Your complete guide to AI regulation in Europe.
The EU AI Act is the world's first comprehensive AI regulation, and it is already reshaping how businesses develop and deploy AI systems across Europe. Whether you are a startup founder, compliance officer, or CTO, understanding the AI Act is no longer optional—it is essential.
This FAQ addresses the 25 most common questions about the EU AI Act, covering everything from basic definitions to practical compliance strategies.
#1. What is the EU AI Act?
The EU AI Act (officially Regulation (EU) 2024/1689 laying down harmonised rules on artificial intelligence) is a comprehensive legal framework adopted by the European Union to regulate artificial intelligence systems. It was approved by the European Parliament in March 2024, entered into force on August 1, 2024, and its obligations are being phased in over the following years.
The Act takes a risk-based approach, classifying AI systems into four categories: prohibited practices, high-risk systems, limited-risk systems, and minimal-risk systems. Each category has different compliance requirements, with the strictest rules applying to high-risk AI systems that could impact fundamental rights, safety, or democratic processes.
Unlike voluntary guidelines, the AI Act is legally binding across all 27 EU member states. It applies not only to companies headquartered in the EU but also to any organization whose AI systems affect people within the European Union—making it a de facto global standard similar to GDPR.
The regulation aims to balance innovation with safety, ensuring AI systems are transparent, accountable, and respect fundamental rights.
#2. When does the EU AI Act take effect?
The EU AI Act follows a staggered implementation timeline with multiple critical deadlines:
- February 2, 2025: Ban on prohibited AI practices (such as social scoring and emotion recognition in workplaces)
- August 2, 2025: Obligations for general-purpose AI models like GPT-4, Claude, and Gemini
- August 2, 2026: Full compliance required for high-risk AI systems
- August 2, 2027: Compliance required for high-risk AI in existing products (grace period for legacy systems)
The most critical deadline for most businesses is August 2, 2026, when all high-risk AI systems must be fully compliant. Companies deploying AI in recruitment, credit scoring, education, or healthcare should prioritize this deadline.
Missing these deadlines is not just a regulatory risk—it is a business continuity risk. Non-compliant systems must be withdrawn from the market.
#3. Who does the EU AI Act apply to?
The AI Act has extraterritorial reach, meaning it applies far beyond EU borders. The regulation covers:
1. Providers: Organizations that develop, manufacture, or substantially modify AI systems and place them on the EU market—regardless of where the provider is located.
2. Deployers: Organizations that use AI systems under their own authority within the EU. If you are a US company using an AI recruitment tool in your Paris office, you are a deployer.
3. Importers and Distributors: Entities that make AI systems available in the EU market.
4. Product Manufacturers: Companies integrating AI as a safety component of products sold in the EU.
In practice, this means: If your AI system is used by or affects people in the EU, you must comply—even if your company has no physical presence in Europe. This mirrors the GDPR model and makes the AI Act a global regulatory force.
#4. What are high-risk AI systems?
High-risk AI systems are classified under Article 6 of the AI Act and listed in Annex III (AI used as a safety component of products regulated under Annex I is also high-risk). They include AI used in contexts where errors could significantly impact fundamental rights, safety, or well-being. The eight Annex III categories are:
1. Biometric Identification: Real-time facial recognition in public spaces
2. Critical Infrastructure: AI managing water, gas, electricity, or transport networks
3. Education and Training: AI that determines access to education or evaluates students
4. Employment: Resume screening, hiring decisions, promotion algorithms, worker monitoring
5. Essential Services: Credit scoring, insurance underwriting, emergency services dispatch
6. Law Enforcement: Predictive policing, crime risk assessment, lie detection
7. Migration and Border Control: Visa processing, asylum decisions, border security
8. Justice and Democracy: AI assisting judicial decisions or influencing elections
If your AI system falls into any of these categories, you face the strictest compliance requirements: documentation, risk management, human oversight, transparency, and ongoing monitoring.
Even seemingly simple tools—like a chatbot that pre-screens job applicants—can be classified as high-risk if they influence hiring decisions.
#5. What are prohibited AI practices?
Certain AI applications are banned outright under the AI Act because they pose unacceptable risks to fundamental rights. Prohibited practices include:
1. Social Scoring: Government-run systems that rate citizens based on behavior (think China's social credit system)
2. Exploiting Vulnerabilities: AI that exploits vulnerabilities of children, elderly, or disabled persons to manipulate behavior
3. Subliminal Manipulation: AI designed to subconsciously manipulate people into harmful actions
4. Real-Time Biometric Surveillance: Public facial recognition in real-time (with narrow law enforcement exceptions)
5. Emotion Recognition in Workplaces and Schools: Using AI to detect employee or student emotions (banned except for medical/safety reasons)
6. Predictive Policing Based Solely on Profiling: Predicting criminality based on personal characteristics without individualized suspicion
These practices are banned from February 2, 2025. Companies deploying these systems face immediate fines and must cease operations.
#6. Does the EU AI Act apply to US companies?
Yes. The AI Act applies to any AI provider or deployer whose systems are used within the EU—regardless of where the company is headquartered.
If you are a Silicon Valley startup and your AI recruitment tool is used by a company in Germany, you are subject to the AI Act. If your credit scoring algorithm affects French consumers, you must comply.
This extraterritorial application mirrors GDPR. The EU designed the AI Act to protect its citizens regardless of where the technology originates. In practice, this makes the AI Act a de facto global standard—similar to how GDPR became the baseline for global privacy laws.
US companies cannot avoid compliance by claiming they are "only" US-based. If you serve EU customers, you must comply or exit the EU market.
#7. What are the penalties for non-compliance?
The AI Act imposes some of the highest administrative fines in regulatory history:
- €35 million or 7% of global annual turnover (whichever is higher) for deploying prohibited AI practices
- €15 million or 3% of global annual turnover for violations of high-risk AI obligations
- €7.5 million or 1% of global annual turnover for providing incorrect information to authorities
For context, 7% of €50 million in annual revenue is €3.5 million, and because the cap is set at whichever amount is higher, a non-SME of that size could still face up to the full €35 million (SMEs and startups benefit from a rule applying the lower of the two amounts). For a large enterprise generating €1 billion, the maximum fine could reach €70 million.
Beyond fines, non-compliance can result in: product recalls, market bans, criminal liability for executives (in some member states), and reputational damage that destroys customer trust.
The regulatory risk is existential. Companies cannot afford to treat AI Act compliance as optional.
#8. What is a "deployer" vs "provider"?
The AI Act distinguishes between two primary roles:
Provider: The entity that develops, trains, or substantially modifies an AI system and places it on the market. Providers bear the primary responsibility for ensuring the AI system meets regulatory requirements before deployment. They must conduct conformity assessments, maintain technical documentation, and implement risk management systems.
Deployer: The entity that uses an AI system under its own authority. For example, if a bank uses a third-party credit scoring algorithm, the bank is the deployer. Deployers must ensure the AI is used correctly, monitor for bias or errors, provide human oversight, and maintain logs.
This distinction matters because both parties share compliance obligations—but different ones. A deployer cannot simply blame the provider if the AI causes harm. Deployers must conduct their own impact assessments and ensure the system aligns with the AI Act's requirements in practice.
#9. Does ChatGPT/OpenAI need to comply with the EU AI Act?
Yes—but in a specific way. Foundation models like GPT-4, Claude, and Gemini are classified as General-Purpose AI (GPAI) models under the AI Act.
Providers of GPAI models must:
- Publish detailed technical documentation
- Maintain copyright compliance for training data
- Implement systemic risk management (for models with "systemic risk" like GPT-4)
- Provide transparency about training data sources and capabilities
However, deployers of these models (companies building applications on top of GPT-4 or Claude) bear separate obligations. If you wrap GPT-4 into a recruitment tool, you become the provider of a high-risk AI system—and you inherit full compliance responsibility.
OpenAI, Anthropic, and Google must comply with GPAI obligations by August 2, 2025. But if you use their APIs, you cannot claim "OpenAI handles compliance." You are responsible for your specific use case.
#10. How do I know if my AI is high-risk?
Determining risk classification is one of the most critical compliance decisions. Follow this decision tree:
Step 1: Does your AI fall into one of the eight high-risk use cases?
- Employment (hiring, promotion, firing, monitoring)
- Credit/insurance scoring
- Education (admissions, grading)
- Law enforcement
- Critical infrastructure
- Biometrics
- Migration/asylum
- Justice/democracy
Step 2: Does the AI system make or significantly influence consequential decisions? To be high-risk, the system must have a meaningful impact on the outcome. A simple calculator is not high-risk even if used in employment, but an algorithm that ranks candidates or filters resumes is.
Step 3: Are there human overrides? If a human always makes the final decision and can easily disregard the AI, the risk may be reduced—but this depends on how much influence the AI has in practice.
If you are uncertain, consult Annex III of the AI Act or conduct a formal risk classification assessment. Getting this wrong has severe consequences—classifying a high-risk system as low-risk exposes you to penalties.
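To make the decision tree concrete, here is a minimal Python sketch of an internal pre-screening helper. The area names and return messages are illustrative assumptions, not an official classification tool: treat the output as a prompt for a formal Annex III assessment, not a legal conclusion.

```python
from dataclasses import dataclass

# Simplified Annex III areas -- illustrative only, not a legal classification.
HIGH_RISK_AREAS = {
    "biometric_identification", "critical_infrastructure", "education",
    "employment", "essential_services", "law_enforcement",
    "migration_border_control", "justice_democracy",
}

@dataclass
class AISystemProfile:
    name: str
    area: str                   # domain the system operates in
    influences_decisions: bool  # does it make or materially shape outcomes?
    human_can_override: bool    # is there genuine human review of each output?

def pre_screen(profile: AISystemProfile) -> str:
    """Rough pre-screening of AI Act risk level (not a formal assessment)."""
    if profile.area not in HIGH_RISK_AREAS:
        return "likely minimal or limited risk -- check transparency duties"
    if not profile.influences_decisions:
        return "high-risk area but low influence -- document your reasoning"
    if profile.human_can_override:
        return "likely high-risk -- oversight helps mitigation, not classification"
    return "likely high-risk -- full compliance obligations apply"

if __name__ == "__main__":
    cv_screener = AISystemProfile(
        name="CV ranking tool", area="employment",
        influences_decisions=True, human_can_override=True,
    )
    print(pre_screen(cv_screener))
```

Note that human override in Step 3 reduces risk in practice but rarely changes the classification itself, which is why the sketch still flags the system as likely high-risk.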
#11. What documentation is required for high-risk AI?
High-risk AI systems require extensive technical documentation that must be maintained throughout the system's lifecycle. Required documentation includes:
1. System Card:
- Description of the AI system's purpose and intended use
- Capabilities and limitations
- Training data sources and preprocessing methods
- Model architecture and development methodology

2. Risk Management System:
- Identification of known risks (bias, privacy, safety)
- Mitigation measures implemented
- Residual risks and user warnings
- Testing and validation procedures

3. Data Governance:
- Training, validation, and testing dataset descriptions
- Data quality metrics
- Bias detection and mitigation strategies
- Data lineage (where data came from)

4. Human Oversight Measures:
- Description of human intervention mechanisms
- Training requirements for human operators
- Interface design for oversight

5. Transparency Information:
- How the system works in plain language
- User notification requirements
- Limitations and warnings

6. Conformity Assessment Report:
- Evidence that the system meets AI Act requirements
- Test results and audit trails
This documentation must be continuously updated as the system evolves. It is not a one-time compliance exercise—it is an ongoing obligation.
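One practical way to keep this documentation current is to maintain it as structured, version-controlled data rather than scattered documents. The sketch below shows one possible machine-readable system card; the field names and example values are illustrative assumptions, not a schema prescribed by the AI Act.

```python
import json
from datetime import date

# Illustrative system card skeleton -- field names are assumptions, not an
# official AI Act schema. Keep it under version control and update it
# whenever the model, data, or intended use changes.
system_card = {
    "system": {
        "name": "candidate-screening-model",
        "version": "1.4.0",
        "intended_purpose": "Rank job applications for human review",
        "capabilities_and_limitations": [
            "Trained on historical hiring data from 2019-2023",
            "Not validated for roles outside the EU labour market",
        ],
    },
    "risk_management": {
        "known_risks": ["gender bias in historical data", "proxy discrimination"],
        "mitigations": ["reweighing of training data", "quarterly bias audits"],
        "residual_risks": ["edge cases with sparse applicant history"],
    },
    "data_governance": {
        "training_data_sources": ["internal ATS export", "public job taxonomy"],
        "bias_checks": ["demographic parity", "equal opportunity difference"],
    },
    "human_oversight": {
        "mechanism": "recruiter reviews every ranked shortlist",
        "override": "recruiter can re-rank or reject any recommendation",
    },
    "last_reviewed": date.today().isoformat(),
}

with open("system_card.json", "w") as f:
    json.dump(system_card, f, indent=2)
```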
#12. Is my chatbot covered by the EU AI Act?
It depends on what your chatbot does. The AI Act applies a functional, risk-based approach—so identical technologies can have different classifications.
Minimal Risk (No special obligations):
- General customer service chatbots that answer FAQs
- Chatbots that book appointments or process simple queries
- Entertainment chatbots

Limited Risk (Transparency obligations only):
- Chatbots that users might mistake for humans (must disclose it is AI)
- Chatbots generating synthetic content

High Risk (Full compliance required):
- Chatbots conducting initial job screenings or candidate evaluations
- Chatbots providing legal, medical, or financial advice
- Chatbots used in education to evaluate students
- Chatbots handling asylum or visa applications
The key question: Does the chatbot influence consequential decisions affecting rights, safety, or access to services? If yes, it is likely high-risk.
Even a simple recruitment chatbot that filters candidates based on qualifications can be high-risk if it materially affects hiring outcomes.
#13. What about GDPR and AI Act overlap?
The AI Act and GDPR are complementary but distinct regulations. Both apply simultaneously—you must comply with both.
Key Overlaps:
1. Personal Data Processing: If your AI processes personal data, GDPR applies. This includes requirements for lawful basis, data minimization, storage limitations, and individual rights (access, rectification, erasure).
2. Automated Decision-Making: GDPR Article 22 restricts automated decisions with legal or similarly significant effects. The AI Act goes further by requiring risk assessments and human oversight for high-risk systems.
3. Data Protection Impact Assessments (DPIA): High-risk AI systems processing personal data require both a DPIA (under GDPR) and a Fundamental Rights Impact Assessment (under AI Act).
4. Transparency: Both regulations require transparency—but the AI Act requires more detailed explanations of how AI systems work.
Where They Differ:
- GDPR focuses on personal data and privacy
- AI Act focuses on safety, fundamental rights, and algorithmic accountability
- GDPR applies to all personal data processing; AI Act applies to specific AI use cases
In practice, your compliance program must address both regulations. An AI system can be GDPR-compliant but still violate the AI Act if it lacks human oversight or proper risk management.
#14. How long do I have to comply?
The timeline depends on your AI system's classification:
Prohibited AI Practices: Banned since February 2, 2025. Immediate cessation required.
General-Purpose AI Models: Compliance required by August 2, 2025. This includes foundation models like GPT, Claude, and custom large language models.
High-Risk AI Systems (New Products): Full compliance required by August 2, 2026. This is the critical deadline for most businesses.
High-Risk AI Systems (Existing Products): Grace period until August 2, 2027 for AI systems already on the market before the AI Act entered into force.
Practical Recommendation: Do not wait until the deadline. Compliance is a multi-month process involving documentation, risk assessments, system redesigns, and third-party audits. Starting in 2025 for a 2026 deadline is cutting it dangerously close.
Aim to achieve compliance 6-12 months before your deadline to account for unexpected issues, regulatory clarifications, and iterative improvements.
#15. What is an AI audit?
An AI audit is a systematic evaluation of your AI system to ensure it meets regulatory, ethical, and technical standards. Under the AI Act, high-risk systems require conformity assessments—essentially, formal audits proving compliance.
What an AI audit includes:
1. Risk Classification: Determine whether your AI is high-risk, limited-risk, or minimal-risk.
2. Technical Evaluation:
- Review model architecture and training data
- Test for bias, fairness, and accuracy
- Validate input/output integrity

3. Documentation Review:
- Verify completeness of technical documentation
- Check risk management procedures
- Confirm data governance practices

4. Governance Assessment:
- Evaluate human oversight mechanisms
- Review incident response procedures
- Assess organizational accountability structures

5. Legal Compliance:
- Confirm adherence to AI Act requirements
- Verify GDPR compliance
- Check national regulatory alignment (e.g., AEPD in Spain)
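To illustrate the technical evaluation step, the sketch below computes a simple demographic parity gap across groups, one of many fairness checks an audit might run. The data format is an assumption for illustration; a real audit would use your production logs and several complementary metrics.

```python
from collections import defaultdict

def demographic_parity_gap(records):
    """Difference in positive-outcome rate between groups.

    `records` is a list of (group, predicted_positive) pairs -- an assumed
    minimal format for illustration, not a prescribed audit interface.
    """
    totals, positives = defaultdict(int), defaultdict(int)
    for group, positive in records:
        totals[group] += 1
        positives[group] += int(positive)
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

if __name__ == "__main__":
    sample = [("A", True), ("A", True), ("A", False),
              ("B", True), ("B", False), ("B", False)]
    gap, rates = demographic_parity_gap(sample)
    print(f"selection rates: {rates}, parity gap: {gap:.2f}")
    # A large gap is a signal to investigate, not proof of unlawful bias.
```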
Who conducts audits? For high-risk AI, you may need a Notified Body—an independent third-party auditor authorized by EU member states. For internal audits, you can use automated tools (like RegulaAI) or hire specialized consultancies.
Regular audits are not optional—they are a legal requirement for high-risk systems and a best practice for all AI deployments.
#16. What is required for human oversight?
Article 14 of the AI Act mandates effective human oversight for high-risk AI systems. This means a qualified human must be able to:
1. Understand the System: Operators must comprehend the AI's capabilities, limitations, and error modes. They cannot be black-box users.
2. Monitor Operation: Real-time or near-real-time monitoring to detect anomalies, bias drift, or harmful outputs.
3. Intervene and Override: The human must have the technical ability and authority to disregard, modify, or halt AI outputs. A "confirm" button is not enough—there must be genuine decision-making power.
4. Stop the System: A "kill switch" or emergency shutdown mechanism must be available and easily accessible.
Practical Implementation:
- Design Friction: Intentionally slow down automated workflows for critical decisions. If a human approves 99.9% of AI recommendations in under 1 second, regulators will view this as "automation bias," not oversight.
- Explainability Interfaces: Provide confidence scores, feature importance, and counterfactual explanations. Humans cannot oversee what they do not understand.
- Training Programs: Operators must receive training on the AI system's limitations and their oversight responsibilities.
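As a design illustration of the intervene-and-override principle, here is a minimal sketch of an approval gate that refuses to apply an AI recommendation until a named reviewer records an explicit decision. The types and field names are assumptions for illustration, not an interface mandated by the Act.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Recommendation:
    subject: str
    ai_decision: str
    confidence: float
    explanation: str          # plain-language rationale shown to the reviewer

@dataclass
class ReviewOutcome:
    final_decision: str
    reviewer: str
    overridden: bool
    reviewed_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def human_review_gate(rec: Recommendation, reviewer: str,
                      decision: str | None = None) -> ReviewOutcome:
    """The AI output is never applied automatically: a reviewer must either
    confirm the AI's suggestion or record a different decision."""
    if decision is None:
        # In a real system this would be an interactive step; the point is
        # that silence never counts as approval.
        raise ValueError("No human decision recorded -- action blocked")
    return ReviewOutcome(
        final_decision=decision,
        reviewer=reviewer,
        overridden=(decision != rec.ai_decision),
    )

if __name__ == "__main__":
    rec = Recommendation("applicant-123", "reject", 0.91,
                         "Score below threshold on experience criteria")
    outcome = human_review_gate(rec, reviewer="j.garcia", decision="advance")
    print(outcome)  # keep this record in your oversight logs
```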
"Human-in-the-loop" is not a checkbox—it is a fundamental design principle that requires organizational commitment.
#17. Do I need to disclose when I use AI?
Yes, but the extent of disclosure depends on risk classification:
High-Risk AI Systems: You must inform users that they are interacting with or subject to decisions made by a high-risk AI system. This includes explaining:
- The system's purpose
- How it works (in plain language)
- The human oversight mechanisms in place
- How users can challenge decisions
Limited-Risk AI (e.g., Chatbots, Deepfakes): You must disclose that content is AI-generated or that users are interacting with an AI system. For example, a chatbot must identify itself as AI if users might mistake it for a human.
Emotion Recognition and Biometric Systems: Users must be explicitly informed before their emotions or biometric data are processed.
Minimal-Risk AI: No mandatory disclosure, but transparency is a best practice for building trust.
Failure to disclose AI use—especially for high-risk systems—is a compliance violation and can result in fines.
#18. How does the AI Act affect startups?
The AI Act presents both challenges and opportunities for startups:
Challenges:
1. Compliance Costs: High-risk AI compliance requires documentation, testing, audits, and potentially Notified Body assessments. For early-stage startups, this can be resource-intensive.
2. Speed to Market: Startups must balance rapid iteration with compliance obligations. Launching a high-risk AI system without proper documentation can result in market bans.
3. Investor Expectations: VCs now demand AI compliance evidence during due diligence. Non-compliant startups face valuation discounts or rejected term sheets.
Opportunities:
1. Regulatory Sandboxes: Countries like Spain offer AI Regulatory Sandboxes where startups can test systems under regulatory supervision without immediate penalty risk. This is a fast track to compliance and a trust signal for enterprise customers.
2. Compliance as Competitive Advantage: Compliant startups can access markets that non-compliant competitors cannot. Enterprise clients (banks, healthcare, government) will only buy from compliant vendors.
3. Premium Positioning: "EU AI Act Certified" is a powerful marketing differentiator, especially for B2B SaaS targeting risk-averse industries.
Startup Strategy: Build compliance into your product from day one. Retrofitting compliance is exponentially more expensive than designing for it upfront. Use automated tools like RegulaAI to reduce manual compliance burden.
#19. What is a conformity assessment?
A conformity assessment is the formal process of proving that a high-risk AI system meets all AI Act requirements. Think of it as a certification audit.
Internal Conformity Assessment (Self-Assessment): For most high-risk systems, providers can conduct internal assessments if they have implemented a quality management system and maintained proper documentation.
Third-Party Conformity Assessment (Notified Body): For specific high-risk categories (e.g., biometric identification), an independent Notified Body—authorized by an EU member state—must conduct the assessment.
What the assessment covers:
- Risk management system adequacy
- Data governance and quality
- Technical documentation completeness
- Human oversight implementation
- Transparency and logging mechanisms
- Accuracy, robustness, and cybersecurity
Outcome: If the assessment is successful, the provider issues an EU Declaration of Conformity and affixes the CE marking to the AI system. This allows the system to be placed on the EU market.
Conformity assessments are not one-time events—they must be repeated if the AI system undergoes substantial modifications.
#20. Can I use open-source AI models?
Yes, but with important caveats. Open-source models (like Llama, BERT, Stable Diffusion) are subject to the AI Act if you deploy them in the EU.
If you use an open-source foundation model: You may be considered the provider if you substantially modify or fine-tune the model. This means full compliance obligations apply.
If you deploy an unmodified open-source model: You are likely a deployer, with obligations to ensure proper use, human oversight, and risk management.
Key Risks:
1. Lack of Documentation: Many open-source models lack the detailed technical documentation required by the AI Act. You may need to generate this yourself.
2. Unknown Training Data: If you cannot prove the training data was unbiased and lawfully collected, you may fail compliance.
3. No Support: Unlike commercial models, open-source models often lack vendor support for compliance issues.
Best Practice: If you use open-source AI, treat it as if you built it yourself. Conduct your own risk assessments, document thoroughly, and implement human oversight. Do not assume "open-source = compliant."
#21. What about AI used internally (not customer-facing)?
The AI Act applies to both customer-facing and internal AI systems—if they fall into high-risk categories.
Examples of high-risk internal AI:
- Employee monitoring systems tracking productivity or behavior
- AI-powered hiring tools used by your HR department
- Internal fraud detection systems that flag employees
- Promotion or performance evaluation algorithms
Even if your AI never touches customers, if it affects employees' rights or job prospects, it is high-risk and requires full compliance.
Exception: AI used solely for internal R&D, testing, or development (and never deployed to make real decisions) is generally not regulated—until it is put into operational use.
Practical Implication: HR departments using AI recruitment tools must comply with the AI Act, even if the tools are only used internally. The regulation protects employees, not just external users.
#22. How do I handle AI incidents and errors?
The AI Act requires incident reporting for serious malfunctions or breaches that affect fundamental rights, health, or safety.
What qualifies as an incident:
- Discriminatory outputs causing harm (e.g., biased hiring decisions)
- Privacy breaches (e.g., model leaking training data)
- Safety failures (e.g., autonomous vehicle crashes)
- Security breaches (e.g., adversarial attacks)
Reporting Timeline: Serious incidents must be reported to the relevant market surveillance authority without undue delay: no later than 15 days after you become aware of them in most cases, within 10 days where a death is involved, and within 2 days for widespread infringements or incidents affecting critical infrastructure. In Spain, the competent authority is AESIA, with the AEPD involved where personal data is affected.
What to report:
1. Description of the incident
2. Root cause analysis
3. Number of affected individuals
4. Immediate mitigation measures
5. Long-term corrective actions

Preparation Steps:
- Draft incident response templates now
- Designate a compliance officer responsible for incident reporting
- Conduct "fire drill" simulations to test your response process
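To support those preparation steps, you can draft an internal incident record in advance so nothing is improvised under deadline pressure. The structure below is an assumed internal template; align the actual submission with the form your national authority requires.

```python
import json
from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone

@dataclass
class AIIncidentRecord:
    # Assumed internal template -- not an official reporting form.
    system_name: str
    description: str
    root_cause: str
    affected_individuals: int
    immediate_mitigation: list[str]
    corrective_actions: list[str]
    detected_at: str
    reported_to_authority: bool = False
    created_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

incident = AIIncidentRecord(
    system_name="credit-scoring-v2",
    description="Systematically lower scores for one postcode cluster",
    root_cause="Feature drift after a data pipeline change",
    affected_individuals=412,
    immediate_mitigation=["model rolled back", "affected decisions re-reviewed"],
    corrective_actions=["add drift monitoring", "expand pre-release bias tests"],
    detected_at="2026-03-02T09:15:00+00:00",
)

print(json.dumps(asdict(incident), indent=2))
```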
Failing to report a serious incident is a compliance violation that compounds the original problem.
#23. What is the AI Office and how does it enforce compliance?
The European AI Office is the central EU body responsible for overseeing AI Act implementation and enforcement. It coordinates national regulators, provides guidance, and supervises general-purpose AI models.
Key Functions:
- Issue implementation guidelines and standards
- Supervise very large general-purpose AI providers (OpenAI, Anthropic, Google)
- Coordinate cross-border enforcement
- Maintain a public database of high-risk AI systems
National Enforcement: Day-to-day enforcement is handled by national competent authorities in each member state. In Spain, this is the AEPD (for data-related issues) and AESIA (for AI-specific supervision).
These authorities have the power to:
- Conduct audits and inspections
- Demand documentation and access to systems
- Issue fines and corrective orders
- Ban non-compliant systems from the market
What this means for you: Expect increased regulatory scrutiny. The AI Office will publish "priority sectors" for enforcement. Companies in those sectors should expect audits.
#24. How can I prepare for compliance now?
Start immediately. Compliance is not a one-day project—it is a multi-month organizational transformation. Follow this roadmap:
Phase 1: Assessment (Months 1-2)
- Inventory all AI systems in your organization
- Classify each system by risk level
- Identify gaps between current state and AI Act requirements

Phase 2: Documentation (Months 2-4)
- Create technical documentation for high-risk systems
- Develop risk management frameworks
- Document data governance practices
- Draft transparency notices and user disclosures

Phase 3: Implementation (Months 4-8)
- Redesign systems to enable human oversight
- Implement logging and monitoring infrastructure
- Conduct bias testing and mitigation
- Train staff on compliance obligations

Phase 4: Validation (Months 8-10)
- Conduct internal audits
- Engage Notified Bodies if required
- Perform conformity assessments
- Issue EU Declarations of Conformity

Phase 5: Ongoing Monitoring (Continuous)
- Monitor for bias drift and performance degradation
- Update documentation as systems evolve
- Conduct regular compliance reviews
- Stay updated on regulatory guidance
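The Phase 1 inventory is easier to keep current as structured data than as a slide deck. Here is a minimal sketch of such an inventory with a quick gap check; the fields, file name, and labels are illustrative assumptions, not a mandated format.

```python
import csv
from dataclasses import dataclass, asdict

@dataclass
class AISystemEntry:
    name: str
    owner: str                 # accountable team or role
    role: str                  # "provider" or "deployer"
    use_case: str
    risk_level: str            # "prohibited", "high", "limited", "minimal"
    documentation_status: str  # "missing", "draft", "complete"

inventory = [
    AISystemEntry("cv-screener", "HR", "deployer", "employment", "high", "draft"),
    AISystemEntry("faq-chatbot", "Support", "deployer", "customer service",
                  "limited", "complete"),
    AISystemEntry("churn-model", "Marketing", "provider", "retention scoring",
                  "minimal", "missing"),
]

with open("ai_inventory.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=asdict(inventory[0]).keys())
    writer.writeheader()
    writer.writerows(asdict(entry) for entry in inventory)

# Quick gap view: high-risk systems without complete documentation.
gaps = [e.name for e in inventory
        if e.risk_level == "high" and e.documentation_status != "complete"]
print("high-risk systems with documentation gaps:", gaps)
```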
Tools to accelerate compliance: Platforms like RegulaAI automate risk assessments, generate documentation templates, and track compliance status—reducing the manual burden by 70-80%.
#25. Where can I get help with EU AI Act compliance?
Compliance resources are expanding rapidly. Here are your options:
1. Automated Compliance Platforms: Tools like RegulaAI provide:
- Risk classification assessments
- Automated documentation generation
- Compliance checklists and gap analysis
- Real-time regulatory updates
2. Legal Consultancies: Specialized AI law firms offer compliance advisory, but costs can range from €15,000-€50,000+ for comprehensive audits.
3. Regulatory Sandboxes: Spain, France, and other member states offer AI Regulatory Sandboxes where you can test compliance under regulatory supervision. This provides direct access to regulator guidance.
4. Industry Associations: Join AI trade groups and industry consortia that publish best practices and advocate for practical implementation.
5. Official Resources:
- European AI Office guidance documents
- National regulator websites (AEPD for Spain, CNIL for France, etc.)
- ISO/IEC standards for AI management (ISO 42001)
6. Training and Certification: Invest in AI governance training for your team. Certifications like Certified AI Governance Professional (CAIGP) are emerging.
Start with RegulaAI: Our platform offers a free 8-question risk assessment that provides immediate clarity on whether your AI is high-risk and what steps you need to take. For comprehensive compliance, our AEPD-aligned checklist covers 100+ controls and generates professional audit reports.
#Conclusion: Compliance is Not Optional
The EU AI Act is not a distant regulatory threat—it is here, and the deadlines are approaching fast. For businesses deploying AI in recruitment, credit scoring, healthcare, education, or any high-risk context, compliance is mandatory.
The good news: Compliance does not have to be overwhelming. With the right tools, frameworks, and planning, you can achieve AI Act compliance while maintaining innovation velocity.
Your next steps:
1. Take the free RegulaAI risk assessment to determine if your AI is high-risk
2. Inventory your AI systems and classify them by risk
3. Start building compliance documentation now—do not wait for the deadline
4. Invest in automated compliance tools to reduce manual effort
5. Engage with regulators through sandboxes or guidance consultations
The companies that treat AI compliance as a strategic advantage—not a bureaucratic burden—will win in the new AI economy. Do not wait for a fine to take action.
Ready to start your compliance journey?
[Take the Free 8-Question Risk Assessment →](/questionnaire)
The assessment takes 5 minutes and provides immediate clarity on your AI Act obligations. Start today—your August 2, 2026 deadline is closer than you think.