
AI Act August 2026: what changes for high-risk AI systems and transparency obligations

9 February 2026

In brief

On August 2, 2026, the AI Act's obligations for high-risk AI systems (Annex III) and its transparency requirements (Article 50) become applicable. Here is what concretely changes for providers and deployers, and how to prepare.

In six months, on August 2, 2026, the European Regulation on Artificial Intelligence (Regulation (EU) 2024/1689) becomes fully applicable. This date marks the simultaneous entry into force of two major blocks: the obligations for high-risk AI systems listed in Annex III, and the transparency rules of Article 50. For companies that develop, import or use AI systems in Europe, the countdown has begun, and preparation time is now counted in weeks, not years.

Why August 2, 2026 Is a Pivotal Date

The AI Act entered into force on August 1, 2024, but its application follows a progressive timeline. Two waves of obligations have already passed: the ban on unacceptable AI practices since February 2025, and the rules on general-purpose AI models (GPAI) since August 2025. The third wave, on August 2, 2026, is the heaviest in terms of operational impact for companies.

Concretely, three things happen on this date:

  • High-risk AI systems (Annex III) — Providers and deployers of AI systems used in eight sensitive areas must comply with all requirements of Chapter III of the regulation.
  • Transparency obligations (Article 50) — Any AI system that interacts with people, generates synthetic content or produces deepfakes must comply with disclosure and labeling rules.
  • Penalty regime — Member States must have transposed penalty rules, including administrative fines of up to €15 million or 3% of worldwide turnover.

A fourth wave will follow in August 2027 for AI systems integrated into already regulated products (medical devices, toys, vehicles — Annex I). But it is indeed August 2026 that constitutes the major operational shift.

The 8 High-Risk Areas of Annex III

Annex III of the regulation defines eight categories of use cases for which an AI system is automatically considered high-risk. Every company whose AI operates in one of these areas is directly concerned by the August 2026 deadline.

1. Biometrics

Remote biometric identification (real-time or post), biometric categorization based on sensitive attributes, and emotion recognition. Simple identity verification systems (confirming that a person is who they claim to be) are excluded.

2. Critical Infrastructure

AI systems used as safety components in managing water, electricity, gas, heating supply, as well as in road traffic management and critical digital infrastructure.

3. Education and Vocational Training

AI determining access to educational institutions, assessing learning outcomes, or guiding individuals' academic and professional paths.

4. Employment and Worker Management

Automatic CV screening, application analysis, employee performance evaluation, decisions on promotions or dismissals. This is one of the most common areas in companies.

5. Access to Essential Services

Assessment of eligibility for social benefits, credit scoring, insurance pricing, assessment of financial risks of natural persons.

6. Law Enforcement

Assessment of evidence reliability, profiling in investigations, assessment of recidivism risk.

7. Migration and Border Control

Assessment of migration risks, examination of asylum applications, detection of fraudulent documents.

8. Administration of Justice and Democratic Processes

AI systems used to assist courts in researching and interpreting facts and law, or to influence election outcomes.

"AI systems referred to in Annex III are considered high-risk [...] where they pose a significant risk of harm to the health, safety or fundamental rights of natural persons." — Article 6, Regulation (EU) 2024/1689

The Exception Clause: When an Annex III System Is Not High-Risk

Article 6, paragraph 3, introduces an important exemption. An AI system listed in Annex III may not be considered high-risk if it meets one of the following conditions:

  • Narrow procedural task — The system performs a well-defined and low-stakes task, such as converting unstructured data into structured data or sorting incoming documents.
  • Improvement of human activity — The AI complements a human action without decision-making autonomy, for example by rephrasing text or checking spelling.
  • Pattern detection — The system identifies deviations from previous human decisions without replacing human judgment.
  • Preparatory task — The AI performs preliminary work that will then be evaluated by a human.

Warning: this exemption never applies to profiling systems that process personal data to evaluate aspects of a person's life (performance, health, reliability, behavior...). These systems always remain high-risk. If you think your system falls under this exception, you must document your assessment before any placing on the market and provide it to authorities on request.

What Providers of High-Risk Systems Must Do

The obligations of providers (any entity that develops or has developed a high-risk AI system and places it on the market under its name) are detailed in Articles 8 to 21 of the regulation. Here are the main requirements to meet before August 2, 2026:

  1. Risk management system (Art. 9) — A continuous, planned and iterative process covering the entire system lifecycle. It includes identification of known and foreseeable risks, estimation and evaluation of these risks, and adoption of appropriate management measures.
  2. Data governance (Art. 10) — Training, validation and testing datasets must be relevant, sufficiently representative, and free from errors as far as possible. Specific practices apply when personal data is processed.
  3. Technical documentation (Art. 11 + Annex IV) — Detailed documentation describing the system, its purpose, performance, limitations and measures taken for compliance. It must be drafted before placing on the market and kept up to date.
  4. Automatic logging (Art. 12) — The system must automatically record relevant events throughout its operation to ensure traceability.
  5. Transparency and information (Art. 13) — Deployers must receive clear instructions for use, including the system's characteristics, capabilities and limitations.
  6. Human oversight (Art. 14) — The system must be designed to allow effective human supervision during its use.
  7. Accuracy, robustness and cybersecurity (Art. 15) — Appropriate levels must be achieved and documented, including resilience to manipulation attempts.
  8. CE marking and declaration of conformity (Art. 16, 47, 48) — Before placing on the market, the provider must affix CE marking and draft an EU declaration of conformity.
  9. Registration in the EU database (Art. 49, 71) — High-risk systems in Annex III must be registered in the EU public database.
  10. Post-market monitoring (Art. 72) — A continuous monitoring plan must be established to detect and correct problems after placing on the market.

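As one concrete illustration of point 4, the automatic logging requirement can be approached with structured, timestamped events. The event schema below is an assumption: the regulation mandates traceability, not a specific format.

```python
# Minimal sketch of automatic event logging for traceability (Art. 12).
# The event fields are illustrative assumptions, not a prescribed schema.
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("ai_system.audit")

def log_event(event_type: str, **details) -> dict:
    """Record a timestamped, machine-readable event for each system operation."""
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event_type": event_type,  # e.g. "inference", "human_override"
        **details,
    }
    logger.info(json.dumps(event))  # ship to durable storage in practice
    return event

log_event("inference", model_version="1.4.2", input_id="req-001",
          human_reviewed=False)
```

Whatever format you choose, the logs must be retained (see the deployer obligations below) and remain exploitable by authorities on request.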
This volume of requirements represents considerable work: the technical documentation of a complex AI system alone is often estimated at 40 to 80 hours. Platforms like AiActo help structure and accelerate this process through guided forms and AI-assisted generation, section by section, compliant with Annex IV.

Deployer Obligations Should Not Be Neglected

Deployers — any legal person using a high-risk AI system in the course of their professional activity — have distinct but equally binding obligations (Article 26). The term "deployer" replaces the former term "user" to avoid confusion with end users.

As a deployer, you must notably:

  • Use the system in accordance with instructions provided by the provider.
  • Ensure human oversight by competent and trained persons.
  • Verify the relevance of input data in relation to the system's intended purpose.
  • Monitor the system's operation and report any serious incident to the provider and authorities.
  • Conduct a fundamental rights impact assessment (FRIA) if you are a public body or a private body providing essential public services.
  • Retain logs automatically generated by the system for an appropriate period.

For deployers who are also subject to the GDPR, the data protection impact assessment (DPIA) and the FRIA complement each other. The regulation provides that the deployer may rely on the provider's documentation to conduct its own impact assessment.

Article 50: Transparency Obligations for All

Article 50, also applicable from August 2, 2026, concerns a much wider category of AI systems — not just high-risk systems. It applies to four specific situations:

Systems Interacting with People

Any AI system designed to interact directly with natural persons (chatbots, voice assistants, conversational agents) must inform the user that they are interacting with an AI, unless it is obvious from the circumstances.

Synthetic Content Generated by AI

Providers of AI systems that generate audio, image, video or text content must ensure that their outputs are marked in a machine-readable format and detectable as artificially generated. A Code of Practice on AI content marking and labeling is being finalized by the AI Office, with a final version expected in June 2026.
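Pending the final Code of Practice, one simple way to picture a machine-readable marker is a provenance record bundled with the generated content. The field names below are hypothetical; real media pipelines would typically rely on an established standard such as C2PA rather than an ad hoc envelope:

```python
# Illustrative only: a minimal machine-readable provenance record attached
# to generated content. The envelope structure is an assumption, not a
# format prescribed by the AI Act or the forthcoming Code of Practice.
import json
from datetime import datetime, timezone

def mark_as_ai_generated(content: str, generator: str) -> str:
    """Bundle content with a machine-readable 'artificially generated' marker."""
    envelope = {
        "content": content,
        "provenance": {
            "ai_generated": True,
            "generator": generator,
            "marked_at": datetime.now(timezone.utc).isoformat(),
        },
    }
    return json.dumps(envelope)

marked = mark_as_ai_generated("A sunset over the Alps...", "example-model-v1")
```

The point is detectability: any downstream tool can parse the record and establish that the content is synthetic, without human inspection.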

Emotion Recognition and Biometric Categorization

Deployers of emotion recognition or biometric categorization systems must inform exposed persons of the system's operation and process data in accordance with GDPR.

Deepfakes

Deployers of AI systems producing deepfakes (images, audio or video) must disclose that the content is artificially generated or manipulated. Exceptions exist for artistic, satirical content or content used in criminal investigations.

Penalties for Non-Compliance

The AI Act penalty regime is graduated according to the severity of breaches. From August 2, 2026, Member States must have transposed the following rules:

  • €35 million or 7% of worldwide turnover for violations of prohibited practices (Article 5) — already in force since February 2025.
  • €15 million or 3% of worldwide turnover for non-compliance with obligations related to high-risk systems (documentation, CE marking, registration, risk management...).
  • €7.5 million or 1% of worldwide turnover for breaches of transparency obligations.

For SMEs and start-ups, the regulation provides for proportionate fines, but the financial risk remains significant. Beyond fines, non-compliance with the AI Act exposes companies to prohibitions on placing systems on the market and to major reputational risk.
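The ceilings above can be sketched as a small helper. Under Article 99, the applicable maximum is the fixed amount or the turnover percentage, whichever is higher (whichever is lower for SMEs and start-ups); national transposition may add detail, so treat this as a sketch:

```python
# Sketch of the fine ceilings described above (Article 99).
# For large undertakings the cap is the HIGHER of the two amounts;
# for SMEs and start-ups, the lower one.
def max_fine(fixed_eur: int, pct_percent: int, worldwide_turnover_eur: int,
             is_sme: bool = False) -> float:
    """Return the applicable fine ceiling in euros."""
    turnover_cap = worldwide_turnover_eur * pct_percent / 100
    pick = min if is_sme else max
    return pick(fixed_eur, turnover_cap)

# High-risk obligations tier, company with EUR 2 billion turnover:
# 3% of 2bn = EUR 60M, which exceeds the EUR 15M fixed amount.
cap = max_fine(15_000_000, 3, 2_000_000_000)
```

For a small provider with EUR 100 million turnover, the same tier would instead cap at 3% of turnover (EUR 3 million) rather than EUR 15 million.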

6 Steps to Prepare Before August 2026

Six months is short. Here's a realistic action plan to approach the deadline calmly:

  1. Inventory your AI systems — Identify all AI systems that your organization develops, imports or uses. This includes third-party tools integrating AI (scoring, recommendation, decision automation).
  2. Classify each system — Determine the risk level of each system according to AI Act criteria. The free AiActo diagnostic allows this classification to be performed in a few minutes.
  3. Identify your role — Are you a provider, deployer, importer or distributor for each system? Your obligations differ according to your role in the value chain.
  4. Draft technical documentation — For high-risk systems, documentation compliant with Annex IV is mandatory. Start with the most critical systems.
  5. Establish governance — Designate persons responsible for human oversight, train your teams in AI literacy, and define monitoring procedures.
  6. Prepare conformity assessment — Depending on your system's category, assessment can be internal or require the intervention of a notified body. Anticipate delays.

Systems Already on the Market: What Does Article 111 Say?

If your high-risk AI system is already marketed or in service before August 2, 2026, the rules are as follows:

The regulation only applies if the system undergoes significant design modifications from that date. In other words, an unchanged system is not immediately subject to new requirements — but any major evolution triggers the obligation of full compliance.

Important exception: high-risk AI systems used by public authorities must comply with the regulation by August 2, 2026 at the latest, whether they have been modified or not. Administrations, public bodies and entities responsible for public service missions have no room for maneuver on this point.

Frequently Asked Questions

When do AI Act obligations for high-risk systems come into force?

Obligations for high-risk AI systems listed in Annex III of Regulation (EU) 2024/1689 come into force on August 2, 2026. Systems integrated into products already regulated by Annex I (medical devices, toys, vehicles) benefit from an additional delay until August 2, 2027.

How do I know if my AI system is classified as high-risk under the AI Act?

An AI system is classified as high-risk if it falls into one of the eight areas of Annex III (biometrics, critical infrastructure, education, employment, essential services, law enforcement, migration, justice) or if it is a safety component of a product covered by Annex I. A classification diagnostic tool can help you determine your risk level in a few minutes.

What are the penalties for non-compliance with the AI Act in 2026?

Fines range from €7.5 million (or 1% of turnover) for transparency breaches to €15 million (or 3% of turnover) for non-compliance with obligations related to high-risk systems. For prohibited practices, penalties rise to €35 million or 7% of worldwide turnover.

What is the difference between provider and deployer in the AI Act?

The provider is the entity that develops or has developed an AI system and places it on the market under its name or brand. The deployer is the entity that uses this system in the course of its professional activity. Their obligations differ: the provider bears the burden of compliant design, technical documentation and CE marking, while the deployer must ensure human oversight, monitoring and incident reporting.

Does Article 50 on transparency only concern high-risk systems?

No. Article 50 applies to all AI systems that interact with people, generate synthetic content or produce deepfakes, regardless of their risk level. This includes chatbots, image generators and voice synthesis tools, even if they are not classified as high-risk.

My AI system is already in service before August 2026, am I concerned?

If your system undergoes no significant modification after August 2, 2026, the obligations do not apply immediately (Article 111). However, if you are a public authority or if your system evolves substantially, compliance is required from that date.

August 2, 2026 is not an abstract deadline: it's when compliance with the AI Act moves from recommendation to obligation, with real penalties at stake. Starting with an inventory and classification of your AI systems remains the first step — the one that conditions all the following ones. The complete AI Act timeline on AiActo allows you to visualize all key dates and plan your compliance.
