AI Act Obligations by Article
Complete guide to the obligations of Regulation (EU) 2024/1689. Identify the requirements applicable to your role (provider, deployer, importer, or distributor) and your AI system's risk level.
Frequently asked questions about AI Act obligations
Find answers to the most common questions about obligations imposed by the European AI Regulation.
The AI Act (Regulation (EU) 2024/1689) imposes differentiated obligations based on the risk level of the AI system. Unacceptable-risk practices are outright prohibited (Article 5). High-risk systems must meet strict requirements: technical documentation, risk management, data governance, human oversight, and cybersecurity. Limited-risk systems are subject to transparency obligations (Article 50). All operators must also ensure AI literacy among their staff (Article 4).
The AI Act applies to four categories of operators: providers (who develop or place an AI system on the market), deployers (who use an AI system under their own authority), importers (who introduce into the EU an AI system from a non-EU provider), and distributors (who make an AI system available on the European market). Obligations vary depending on the operator's role and the risk level of the system concerned.
The provider is the entity that develops an AI system or has it developed, and places it on the market or puts it into service under its own name or trademark. The deployer is any person who uses an AI system under their own authority in a professional context. The same organization can hold both roles depending on the systems involved. Providers bear heavier obligations, including technical documentation and conformity assessment prior to market placement.
High-risk AI systems must meet a comprehensive set of regulatory requirements: risk management system (Art. 9), training data governance (Art. 10), detailed technical documentation (Art. 11), automatic event logging (Art. 12), transparency and user information (Art. 13), human oversight measures (Art. 14), and guarantees of accuracy, robustness, and cybersecurity (Art. 15). The provider must also establish a quality management system.
The AI Act entered into force on 1 August 2024 with a phased implementation. Prohibited practices and the AI literacy obligation have applied since 2 February 2025. Rules for GPAI models have applied since 2 August 2025. Transparency obligations (Art. 50) and high-risk systems under Annex III will apply from 2 August 2026. Systems covered by Annex I (regulated products) will follow on 2 August 2027.
Penalties are proportionate to the severity of the infringement: up to EUR 35 million or 7% of annual worldwide turnover for prohibited practices, up to EUR 15 million or 3% for non-compliance with other regulatory obligations, and up to EUR 7.5 million or 1% for providing inaccurate information to authorities. For SMEs and startups, amounts are adjusted to the lower of the two thresholds.
Yes, providers of GPAI models (such as large language models) have had specific obligations since 2 August 2025: preparing technical documentation, complying with EU copyright law, and publishing a summary of training data. GPAI models posing systemic risk have additional obligations: thorough model evaluation, systemic risk mitigation, serious incident reporting, and adequate cybersecurity levels.
An AI system is classified as high-risk in two cases: if it is a safety component of a product covered by EU harmonization legislation (Annex I: medical devices, machinery, toys, etc.), or if it falls within the sensitive domains listed in Annex III (biometrics, critical infrastructure, education, employment, essential services, law enforcement, migration, justice). Our free diagnostic tool lets you identify the applicable risk level for your system in just a few minutes.
Identify your obligations in 3 minutes
Our free diagnostic analyzes your AI system and tells you exactly which obligations apply to you.