AI Act: What Really Changes on August 2, 2026

Key takeaways
- Still the legal deadline: August 2, 2026 remains the binding date for high-risk systems (Annex III) and transparency (Art. 50), even as the Digital Omnibus works its way through the legislative process
- 8 high-risk domains: biometrics, critical infrastructure, education, employment, essential services, law enforcement, justice and democratic processes, and border management
- Providers: Annex IV technical documentation, risk management (Art. 9), CE marking, EU database registration
- Deployers: human oversight, log retention, informing affected individuals, fundamental rights impact assessment (public sector)
- Transparency for all: chatbots, generative AI and deepfakes subject to disclosure obligations regardless of risk level
- Penalties: up to €15 million or 3% of global annual turnover, whichever is higher, for non-compliance with high-risk system obligations
In less than five months, the European regulatory landscape shifts decisively. On August 2, 2026, the AI Act (Regulation (EU) 2024/1689) reaches its most structural milestone: obligations for high-risk AI systems enter into force, and transparency rules become fully enforceable. For organisations across Europe, this is no longer a question of preparation but a question of compliance.
One essential clarification upfront: the Digital Omnibus currently moving through the EU legislative process could delay some of these obligations to December 2027. But until it is formally adopted, August 2, 2026 remains the legally binding date. This article covers what the law requires today.
What the AI Act Has Already Imposed Before August 2026
To understand what changes in August, it helps to recall what already applies. The AI Act did not enter into force in one block — it has been rolling out progressively since August 2024:
- February 2, 2025: prohibition of unacceptable-risk practices (Article 5) — social scoring, subliminal manipulation, certain real-time biometric uses. These rules are already in force and enforceable.
- February 2, 2025: AI literacy obligation (Article 4) — providers and deployers must ensure their staff have a sufficient level of AI competence.
- August 2, 2025: obligations for general-purpose AI models (GPAI, Articles 51–56) — already applicable to providers of models such as LLMs.
This phased calendar is deliberate: it gives organisations time to adapt to the most complex obligations. But the time remaining before August 2026 is now measured in weeks.
What Enters Into Force on August 2, 2026
High-risk AI systems (Annex III)
This is the heart of the August 2026 change. Annex III lists eight domains in which AI systems are automatically classified as high-risk:
- Biometrics: remote biometric identification, biometric categorisation, emotion recognition
- Critical infrastructure: management of electricity, water, gas and transport networks
- Education and training: systems determining access to institutions, student assessment
- Employment and HR: recruitment tools, performance management, candidate evaluation
- Essential services: credit scoring, insurance, social benefit assessment
- Law enforcement: criminal profiling, evidence reliability assessment, recidivism risk evaluation
- Justice and democratic processes: judicial decision support
- Border management: risk assessment of individuals at borders, document verification
If your organisation develops or uses an AI system in any of these domains, the Chapter III obligations of the AI Act apply to you from August 2, 2026 — whether the tool is developed in-house or sourced from a SaaS provider.
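To make the classification exercise concrete, here is a minimal first-pass triage sketch in Python over a hypothetical internal AI inventory. The domain tags paraphrase Annex III and the inventory fields are assumptions made for the example; treat it as a starting point for review, not a legal determination.

```python
# A minimal first-pass triage sketch. Domain tags paraphrase Annex III;
# the inventory fields are illustrative, not a legal classification.
from dataclasses import dataclass

ANNEX_III_DOMAINS = {
    "biometrics", "critical_infrastructure", "education", "employment",
    "essential_services", "law_enforcement", "justice", "border_management",
}

@dataclass
class AISystem:
    name: str
    domain: str                        # free-text tag from your inventory
    interacts_with_humans: bool        # chatbot, voice agent, etc.
    generates_synthetic_content: bool  # text, image, audio, video

def triage(system: AISystem) -> list[str]:
    """Return the AI Act obligation buckets this system plausibly falls into."""
    buckets = []
    if system.domain in ANNEX_III_DOMAINS:
        buckets.append("high-risk (Chapter III obligations)")
    if system.interacts_with_humans or system.generates_synthetic_content:
        buckets.append("transparency (Art. 50)")
    return buckets or ["minimal risk (no specific obligations)"]

# Example: an internal CV-screening tool used by HR
print(triage(AISystem("cv-screener", "employment", False, False)))
# -> ['high-risk (Chapter III obligations)']
```

Even a spreadsheet encoding of the same logic works; what matters is that every system in your inventory carries an explicit classification before the deadline.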
Transparency obligations for everyone (Article 50)
Article 50 does not only concern high-risk systems — it applies to all AI systems that interact with humans or generate synthetic content:
- Chatbots and conversational agents (Art. 50§1): any system intended to interact directly with a natural person must clearly inform them they are talking to an AI, unless this is obvious from the context to a reasonably well-informed person.
- Deepfakes and manipulated content (Art. 50§4): image, audio or video content generated or manipulated to resemble real people must be clearly disclosed as artificially generated or manipulated, with a lighter regime for evidently artistic, creative or satirical works.
- Emotion recognition (Art. 50§3): any person subject to an emotion recognition system must be informed.
Important note: Article 50§2 (machine-readable marking of AI-generated content) is the only element potentially shifted — to November 2, 2026 under Parliament's Digital Omnibus position. All other Article 50 obligations remain at August 2026.
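For the chatbot case, the sketch below shows one way to guarantee the Art. 50§1 notice is displayed before the first model reply. The notice wording and the session structure are assumptions made for the example; the regulation prescribes the obligation, not the implementation.

```python
# A sketch of an Art. 50§1 disclosure gate, assuming a simple
# message-based chat session. Wording and structure are illustrative.
DISCLOSURE = (
    "You are chatting with an AI assistant, not a human. "
    "Responses are generated automatically."
)

def ensure_disclosure(session: dict) -> dict:
    """Insert the AI notice once, before the first model reply,
    unless the deployment context already makes it obvious."""
    if not session.get("disclosure_shown"):
        session.setdefault("messages", []).insert(
            0, {"role": "notice", "content": DISCLOSURE}
        )
        session["disclosure_shown"] = True
    return session

session = ensure_disclosure({"messages": []})
print(session["messages"][0]["content"])
```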
What This Means for Providers
If you develop and commercialise a high-risk AI system, you are a provider under Article 3. Your obligations from August 2026 are substantial:
- Technical documentation (Art. 11 + Annex IV): a complete file covering system description, architecture, data governance, risk management, performance metrics, instructions for use, post-market monitoring, quality management and declaration of conformity.
- Risk management system (Art. 9): a continuous process of identifying, evaluating and mitigating risks throughout the system's lifecycle.
- Data governance (Art. 10): training datasets must be documented, representative and subject to bias analysis.
- Human oversight (Art. 14): the system must be designed to allow a human to monitor, understand and intervene in its operation.
- CE marking + declaration of conformity (Art. 47–48): formal attestation of compliance before placing on the market.
- EU database registration (Art. 49): mandatory registration in the Art. 71 EU database before placing on the market or putting into service, for most high-risk systems.
Preparing this documentation is estimated to take between 40 and 80 hours for a complex system. AiActo structures this process section by section with a contextual AI assistant, significantly reducing that preparation time.
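Whatever tooling you use, it helps to treat the file as structured data rather than a monolithic document. Below is a minimal tracker sketch in Python; the section titles are a simplified paraphrase of Annex IV, not the authoritative list.

```python
# An Annex IV documentation tracker sketch. Section titles are a
# simplified paraphrase of Annex IV for illustration only.
ANNEX_IV_SECTIONS = [
    "General description of the AI system",
    "Elements of the system and development process",
    "Monitoring, functioning and control",
    "Performance metrics and their appropriateness",
    "Risk management system (Art. 9)",
    "Lifecycle changes and versions",
    "Harmonised standards applied",
    "EU declaration of conformity",
    "Post-market monitoring plan (Art. 72)",
]

def documentation_gaps(completed: set[str]) -> list[str]:
    """Return the Annex IV sections still missing from the file."""
    return [s for s in ANNEX_IV_SECTIONS if s not in completed]

done = {"General description of the AI system", "Risk management system (Art. 9)"}
for gap in documentation_gaps(done):
    print("missing:", gap)
```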
What This Means for Deployers
If you use a high-risk AI system developed by a third party, you are a deployer under Article 3. Your obligations are distinct but equally real:
- Effective human oversight (Art. 26§2): you must put in place procedures allowing a qualified person to monitor the system, interpret its outputs and intervene or stop it if necessary.
- Log retention (Art. 26§6): automatically generated logs must be retained for at least six months (see the sketch after this list).
- Informing affected individuals (Art. 26§11 + Art. 50): any person subject to a decision influenced by a high-risk system must be informed.
- Use in line with provider instructions (Art. 26§1): you cannot repurpose the system beyond its intended use without assuming provider-level responsibility.
- Fundamental rights impact assessment — FRIA (Art. 27): mandatory for public bodies and certain essential services operators before deployment.
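Of these duties, the log-retention floor is the most mechanical and the easiest to automate. Here is a minimal sketch of the Art. 26§6 rule, assuming each log entry carries its creation date; six months is the regulation's floor, and sector-specific law may impose longer periods.

```python
# A sketch of the Art. 26§6 retention floor. Six months is a minimum;
# Union or national law may impose a longer period.
from datetime import date, timedelta

MIN_RETENTION = timedelta(days=183)  # at least six months

def purgeable(log_date: date, today: date) -> bool:
    """A log entry may only be deleted once the retention floor has passed."""
    return today - log_date > MIN_RETENTION

print(purgeable(date(2026, 8, 2), date(2026, 12, 1)))  # False: keep it
print(purgeable(date(2026, 8, 2), date(2027, 3, 1)))   # True: may purge
```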
The Penalties That Come Into Play
The AI Act establishes a progressive and dissuasive penalty regime (Art. 99), enforceable for high-risk obligations from August 2026. Each cap is the higher of a fixed amount and a share of global annual turnover:
- Up to €35 million or 7% of global turnover for prohibited practices (Article 5) — already enforceable since February 2025
- Up to €15 million or 3% of global turnover for non-compliance with high-risk system obligations
- Up to €7.5 million or 1% of global turnover for providing inaccurate information to authorities
For SMEs and start-ups, the regulation caps each fine at the lower of the two amounts (Art. 99§6) and requires authorities to account for organisational size and available resources. But there is no blanket exemption: an SME benefits from proportionate enforcement, not from a waiver.
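To see how these ceilings combine, here is the arithmetic as a short sketch. The turnover figures are hypothetical; for most undertakings the cap is the higher of the two amounts, while Art. 99§6 caps fines for SMEs at the lower.

```python
# Worked example of the Art. 99 fine ceilings. Turnover figures are
# hypothetical; the SME rule follows Art. 99§6 (lower of the two).
def fine_ceiling(fixed_eur: float, pct: float, turnover_eur: float,
                 is_sme: bool = False) -> float:
    pct_amount = pct * turnover_eur
    return min(fixed_eur, pct_amount) if is_sme else max(fixed_eur, pct_amount)

# High-risk non-compliance (15 M EUR / 3%), group with 1 B EUR turnover:
print(f"{fine_ceiling(15e6, 0.03, 1e9):,.0f}")               # 30,000,000
# Same breach, SME with 10 M EUR turnover:
print(f"{fine_ceiling(15e6, 0.03, 10e6, is_sme=True):,.0f}")  # 300,000
```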
What You Need to Have Done Before August 2, 2026
1. Classify your AI systems: identify which fall under Annex III, which are subject to Article 50, and which are minimal risk. This is the prerequisite for every other step.
2. Determine your role: provider, deployer, or both? Your role determines the entirety of your obligations.
3. Audit your SaaS vendors: ask your AI providers whether they are compliant, whether they have technical documentation and whether they are registered in the EU database.
4. Build your documentation file: even as a deployer, you must be able to demonstrate your human oversight processes and control procedures.
5. Inform your users: update your legal notices, interfaces and communications to meet Article 50 transparency obligations.
The free AiActo diagnostic lets you complete steps 1 and 2 in under 3 minutes — and get a personalised summary of your obligations before the deadline.
Frequently Asked Questions
What happens if the Digital Omnibus is adopted before August 2026?
If the Digital Omnibus is adopted before August 2, 2026 — which is the stated objective of the EU institutions — obligations for Annex III high-risk systems would be delayed to December 2, 2027. The European Parliament's position adopted on March 18, 2026 supports this delay. But until the text is formally published in the EU Official Journal, August 2, 2026 remains the legal deadline.
My AI system was already deployed before August 2026 — am I exempt?
Partially. Article 111 provides that systems already on the market before the obligations enter into force benefit from a transition period — provided they undergo no significant modification. If your system evolves, the new obligations apply. And this exemption is itself tied to dates that may shift with the Digital Omnibus.
How do I know if my system is high-risk?
Classification rests on two criteria: the domain of application (Annex III) and the role the system plays in the decision. An AI-powered CV screening tool is high-risk by definition. A marketing text generator generally is not. Article 6§3 exempts Annex III systems that only perform narrow procedural or preparatory tasks, but that exemption must be documented. When in doubt, the AiActo diagnostic applies Annex III logic to your specific case in a few questions.
Do obligations apply to AI tools used only internally?
Yes, provided the system falls under Annex III and the individuals affected by its decisions are people — employees, candidates, clients. An AI-powered employee performance management tool falls within the AI Act's scope even if it is never commercialised.
What is the difference between August 2026 and August 2027?
August 2026: Annex III high-risk systems and transparency obligations (Art. 50). August 2027: high-risk systems embedded in products regulated by other EU legislation (Annex I — medical devices, machinery, radio equipment). The latter benefit from additional time because their compliance involves more complex sectoral certification procedures.
August 2, 2026 is not an abstract threat — it is a date inscribed in a text published in the Official Journal of the European Union. Whatever the outcome of the Digital Omnibus, organisations that have structured their compliance before that date will be better protected, better documented, and better positioned in a market where trust in AI is becoming a competitive advantage. Check the complete AI Act timeline to visualise all steps in the regulatory calendar.
