AI Act Technical Documentation: The Complete Guide to Article 11 and Annex IV

In brief
Article 11 of the AI Act requires providers of high-risk AI systems to prepare technical documentation compliant with Annex IV. Here are the 9 mandatory sections, their expected content, and how to structure your documentation before August 2026.
How long does it take to produce a technical file compliant with the AI Act? Estimates range from 40 to 80 hours for a complex AI system — and that assumes the team documented its design choices from the start. Article 11 of Regulation (EU) 2024/1689 requires every provider of a high-risk AI system to prepare complete technical documentation before placing it on the market. Its minimum content is defined by Annex IV, which lists 9 sections covering the entire lifecycle of the system.
What Article 11 requires
Article 11 establishes three fundamental principles governing technical documentation for high-risk AI systems.
A mandatory file before market placement
Technical documentation must be prepared before the AI system is placed on the market or put into service, and kept up to date throughout its lifecycle. This is not a retrospective document: it accompanies the system's development from its design phase.
A dual purpose: demonstrate and enable assessment
The technical file serves two functions. It must first demonstrate that the system meets the requirements of Articles 8 to 15 (risk management, data governance, record-keeping, transparency, human oversight, accuracy and cybersecurity). It must then provide national authorities and notified bodies with the information necessary to assess compliance, in a clear and comprehensive form.
"The technical documentation shall be drawn up in such a way as to demonstrate that the high-risk AI system complies with the requirements [...] and to provide national competent authorities and notified bodies with the necessary information in a clear and comprehensive form to assess the compliance." — Article 11(1), Regulation (EU) 2024/1689
A simplified form for SMEs
Article 11 provides that SMEs and startups may supply the elements of Annex IV in a simplified manner. The Commission is to establish a form adapted to the needs of small and microenterprises. Notified bodies are required to accept this form for conformity assessment — a significant relief for smaller operators.
A single file for regulated products
Where a high-risk AI system is linked to a product covered by Union harmonisation legislation (Annex I, Section A — medical devices, toys, vehicles), a single set of documentation is drawn up, incorporating the requirements of both the AI Act and the sectoral legislation. This provision avoids duplication for products already subject to a technical documentation regime.
The 9 sections of Annex IV
Annex IV defines the minimum content of the technical documentation. Each section corresponds to an aspect of the system that the provider must document comprehensively. Here is a detailed look at each one, with key considerations for drafters.
Section 1 — General description of the AI system
This section provides an overall portrait of the system. It must include:
- Intended purpose — The system's intended use, the provider's name, the version and its relationship to previous versions
- Interactions — How the system interacts with hardware, software or other AI systems that are not part of the system itself
- Software versions — Versions of relevant software and update requirements
- Forms of market placement — The different configurations under which the system is marketed (SaaS, embedded, API)
- Hardware — The hardware on which the system is intended to run
- Instructions for use — The instructions for deployers in accordance with Article 13
The goal is that an external assessor can understand what the system does, in what context it operates and how it is distributed, without needing deep technical expertise.
Section 2 — System development and design process
This is the most extensive section. It covers the entire development process:
- Design specifications — The general logic of the system and the algorithms used
- Key design choices — Structural decisions and their rationale, including assumptions regarding the persons or groups of persons the system is intended to be used on
- Classification choices — The main categorisation decisions, what the system is designed to optimise for and the relevance of the different parameters
- Expected output — A description of the expected output and output quality
- Technical trade-offs — The compromises made between different technical solutions to meet the requirements of Chapter III, Section 2
- System architecture — How the different components work and the computational resources used
- Data — Training, validation and test datasets: provenance, characteristics, collection process, preparation, labelling, cleaning and governance measures (Article 10)
- Human oversight — Measures provided in accordance with Article 14, including technical means to facilitate interpretation of outputs by deployers
- Pre-determined changes — Where applicable, planned changes to the system and its performance, with technical solutions ensuring continued compliance
A critical point: the regulation requires documenting the why behind decisions, not just the what. Trade-offs, assumptions and compromises must be explicitly stated and justified.
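One lightweight way to satisfy this "why, not just what" expectation is to capture each significant decision as a structured record from the start. The sketch below uses a Python dataclass with illustrative field names; neither the structure nor the names are prescribed by the regulation, and the example values are invented.

```python
# Hypothetical record structure for capturing design decisions as Annex IV,
# Section 2 expects. Field names and values are illustrative, not
# prescribed by the regulation.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class DesignDecision:
    decision_id: str
    date_taken: date
    summary: str                 # what was decided
    rationale: str               # why: the point assessors look for first
    options_considered: list = field(default_factory=list)
    trade_offs: str = ""         # compromises vs. Chapter III, Section 2
    assumptions: str = ""        # e.g. about the persons the system targets

decision = DesignDecision(
    decision_id="DD-014",
    date_taken=date(2025, 3, 2),
    summary="Use gradient-boosted trees instead of a deep network",
    rationale="Keeps outputs interpretable for human oversight (Article 14)",
    options_considered=["gradient boosting", "MLP", "logistic regression"],
    trade_offs="Roughly 2 points of recall accepted for explainable attributions",
    assumptions="Applicants are adults with at least one prior credit record",
)
print(decision.decision_id)  # → DD-014
```

Records like this can be exported directly into the Section 2 deliverable, so the rationale and trade-offs are never reconstructed after the fact.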
Section 3 — Monitoring, functioning and control
This section describes the monitoring and control mechanisms built into the system. It covers traceability, event logging (Article 12), human oversight capabilities and alert mechanisms. The provider must demonstrate that the system produces sufficient information for deployers to exercise effective supervision.
Section 4 — Performance metrics
The provider must describe the appropriateness of the performance metrics chosen for the specific AI system. Reporting scores alone is not enough: you must justify why the chosen metrics (precision, recall, F1, AUC, error rate) are relevant to the intended purpose and usage context of the system.
This section includes test results and evaluations conducted to verify compliance with accuracy, robustness and cybersecurity requirements (Article 15), including bias testing and performance evaluations across different population subgroups.
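Subgroup evaluation of this kind can be very simple to produce if it is scripted. The following is a minimal sketch of per-subgroup recall reporting; the labels, predictions and group assignments are invented purely for illustration.

```python
# Minimal sketch of per-subgroup performance reporting for Annex IV,
# Section 4. The labels, predictions and group assignments below are
# invented purely for illustration.
from collections import defaultdict

def recall_by_group(y_true, y_pred, groups):
    """Recall computed separately for each population subgroup."""
    tp = defaultdict(int)  # true positives per group
    fn = defaultdict(int)  # false negatives per group
    for t, p, g in zip(y_true, y_pred, groups):
        if t == 1 and p == 1:
            tp[g] += 1
        elif t == 1 and p == 0:
            fn[g] += 1
    return {g: round(tp[g] / (tp[g] + fn[g]), 3)
            for g in sorted(set(groups)) if tp[g] + fn[g]}

y_true = [1, 0, 1, 1, 0, 1, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 0, 1]
groups = ["A", "A", "A", "B", "B", "B", "B", "A"]

per_group = recall_by_group(y_true, y_pred, groups)
print(per_group)  # → {'A': 0.5, 'B': 0.667}
```

A gap between subgroups like the one above is exactly what the documentation should surface and then justify or mitigate, rather than hide behind a single aggregate score.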
Section 5 — Risk management system
A detailed description of the risk management system in accordance with Article 9. This includes:
- Risk identification — Known and reasonably foreseeable risks to health, safety and fundamental rights
- Analysis and evaluation — Estimation of the likelihood and severity of each identified risk
- Mitigation measures — Measures adopted to eliminate or reduce each risk, including accepted residual risks
- Testing and validation — Test procedures used to evaluate the effectiveness of mitigation measures
The risk management system must be iterative and continuous, covering the entire AI system lifecycle — not just the development phase.
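In practice, the elements above are often kept as a machine-readable risk register. The sketch below uses a 1-5 likelihood by 1-5 severity scoring grid, which is a common convention rather than anything Article 9 itself mandates; the risks and mitigations shown are illustrative.

```python
# Illustrative risk register for Annex IV, Section 5. The 1-5 likelihood
# by 1-5 severity scoring grid is a common convention, not something
# Article 9 itself mandates.
def risk_score(likelihood: int, severity: int) -> int:
    """Rank risks by the product of estimated likelihood and severity."""
    assert 1 <= likelihood <= 5 and 1 <= severity <= 5
    return likelihood * severity

register = [
    {"risk": "Discriminatory error rates across age groups",
     "likelihood": 3, "severity": 4,
     "mitigation": "Stratified evaluation with per-subgroup thresholds",
     "residual_accepted": True},
    {"risk": "Performance drift after retraining",
     "likelihood": 2, "severity": 3,
     "mitigation": "Shadow deployment with a documented rollback plan",
     "residual_accepted": False},
]

for entry in register:
    entry["score"] = risk_score(entry["likelihood"], entry["severity"])

# Highest-scoring risks first, the order an assessor expects to read them in
register.sort(key=lambda e: e["score"], reverse=True)
print([e["score"] for e in register])  # → [12, 6]
```

Keeping the register in this form makes it straightforward to show that each mitigation was tested and that any residual risk was explicitly accepted.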
Section 6 — Lifecycle changes
A description of relevant changes made by the provider to the system throughout its lifecycle. Every significant change must be documented: algorithm updates, retraining, architecture modifications, changes in training data. Version traceability is essential.
Section 7 — Applicable harmonised standards
A list of harmonised standards applied in full or in part, whose references have been published in the Official Journal of the European Union. In the absence of harmonised standards (which remains the case as of February 2026 — CEN and CENELEC expect to publish them by late 2026), the provider must supply a detailed description of the solutions adopted to meet the requirements of Chapter III, Section 2, including a list of other relevant standards and technical specifications used.
Section 8 — EU declaration of conformity
A copy of the EU declaration of conformity referred to in Article 47. This formal document commits the provider to compliance with all applicable requirements. It is inseparable from the CE marking.
Section 9 — Post-market monitoring
A detailed description of the system for evaluating performance in the post-market phase, in accordance with Article 72. This section includes the post-market monitoring plan, which defines how the provider will continue to monitor the system, collect deployer feedback, detect performance drift and report serious incidents.
Common mistakes to avoid
Preparing technical documentation is the most underestimated obligation of the AI Act. Here are the most frequent pitfalls.
Documenting after the fact
Building the technical file after development is extremely difficult. Design choices, assumptions about training data and technical trade-offs are often undocumented if the process is not integrated from the start. The best approach is to capture compliance evidence within development workflows as the project progresses.
Confusing description with demonstration
Annex IV does not merely ask you to describe the system. It requires demonstrating that processes, controls and safeguards have actually been implemented. An assessor will look for verifiable evidence: actual test results, audit logs, risk analyses that genuinely influenced design decisions.
Neglecting data traceability
The training data section (part of Section 2) is one of the most heavily scrutinised areas. Typical gaps include insufficiently documented provenance of datasets, limited evidence of bias testing, and missing traceability between data governance decisions and technical implementation.
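One way to close those gaps is to generate a provenance record per dataset version at preparation time. The sketch below shows what such a record might look like; the keys and example values are illustrative, not a prescribed schema.

```python
# Sketch of a dataset provenance record covering the Article 10 points that
# Annex IV, Section 2 expects (origin, collection, preparation, labelling).
# The keys and example values are illustrative, not a prescribed schema.
import hashlib
import json

def provenance_record(name, origin, collection, preparation, labelling, raw_bytes):
    record = {
        "dataset": name,
        "origin": origin,
        "collection_process": collection,
        "preparation": preparation,
        "labelling": labelling,
        # A content hash ties the documentation to the exact data version
        "sha256": hashlib.sha256(raw_bytes).hexdigest(),
    }
    return json.dumps(record, indent=2)

record_json = provenance_record(
    name="loan_applications_v3",
    origin="Internal CRM export, 2019-2024",
    collection="Consent-based collection during the application process",
    preparation="Deduplication, removal of direct identifiers",
    labelling="Dual annotation with adjudicated disagreements",
    raw_bytes=b"placeholder for the dataset contents",
)
print(record_json)
```

The content hash is the traceability link auditors look for: it ties each governance decision in the documentation to the exact data version the model was actually trained on.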
Producing a static document
Technical documentation is a living document. It must be updated with every significant system modification, retraining event, newly discovered vulnerability or change in usage context. A file dated from market placement and never updated does not meet the requirement to keep it current.
Connection with other provider obligations
Technical documentation does not function in isolation. It connects with the full set of provider obligations under Articles 8 to 21.
Quality management system (Article 17)
Article 17 requires the provider to maintain a documented quality management system covering compliance strategy, design techniques, examination procedures, data management and maintenance of technical documentation. The QMS and Annex IV documentation reinforce each other: the QMS defines processes, the technical documentation provides the evidence.
Documentation retention (Article 18)
The provider must retain technical documentation for 10 years after the AI system is placed on the market. Authorities may request access at any time during this period. This retention requirement also applies to automatically generated logs (Article 19), which must be kept for at least 6 months.
Conformity assessment (Article 43)
Technical documentation is the foundation of the conformity assessment. Depending on the case, this assessment takes the form of an internal control (Annex VI) or an evaluation by a notified body (Annex VII). In both cases, the assessor relies on the Annex IV file to verify compliance with requirements.
Registration in the EU database (Article 49)
Before being placed on the market or put into service, Annex III high-risk AI systems must be registered in the EU database established under Article 71. The information required for registration (Annex VIII) partially overlaps with that of the technical documentation.
Structure your documentation in 7 steps
Here is a pragmatic approach to building a compliant technical file, even without harmonised standards yet available.
- Classify your system — Confirm that your system qualifies as high-risk under Article 6. Classification determines whether Annex IV applies
- Conduct a gap analysis — Compare your existing documentation (specifications, READMEs, model cards, test reports) against the 9 sections of Annex IV. Identify the gaps
- Structure the file — Create a template aligned with the 9 sections. Each section corresponds to a distinct deliverable, with identified owners on the team
- Integrate capture into workflows — Connect documentation to existing tools (MLflow, Weights & Biases, versioning systems). Evidence should be generated automatically during development, not reconstructed afterwards
- Document choices and trade-offs — For each significant design decision, record the context, options considered, choice made and its rationale. This is what assessors look for first
- Validate with an internal audit — Before the formal conformity assessment, simulate an audit by verifying that each section answers the questions a notified body would ask
- Plan for continuous updates — Define update triggers (retraining, data changes, incidents, software updates) and the associated responsibilities
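Step 4 above can be as small as a capture hook called at the end of each training run. The sketch below writes evidence to a JSON file the Annex IV file can later point to; the paths and field names are illustrative, and real pipelines would typically delegate this to a tracking tool such as MLflow.

```python
# Minimal sketch of step 4: a capture hook that writes compliance evidence
# to a JSON file during each training run, so the Annex IV file can point
# to it later. Paths and field names are illustrative.
import json
import time
from pathlib import Path

def log_run_evidence(run_dir: Path, params: dict, metrics: dict, notes: str = "") -> Path:
    """Persist one run's parameters, metrics and rationale as evidence."""
    run_dir.mkdir(parents=True, exist_ok=True)
    evidence = {
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%S"),
        "params": params,    # design choices as actually executed
        "metrics": metrics,  # test results feeding Annex IV, Section 4
        "notes": notes,      # rationale, trade-offs, incidents
    }
    path = run_dir / "evidence.json"
    path.write_text(json.dumps(evidence, indent=2))
    return path

path = log_run_evidence(
    Path("runs/example-run"),
    params={"model": "gradient_boosting", "max_depth": 6},
    metrics={"recall": 0.91, "min_subgroup_recall": 0.86},
    notes="Depth capped at 6 to keep feature attributions stable",
)
print(path.name)  # → evidence.json
```

Because the evidence is produced by the run itself, it never has to be reconstructed afterwards, which is precisely the failure mode described above.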
Preparing this documentation represents considerable effort, especially for teams that have not documented their AI development processes from the outset. Platforms like AiActo help structure and accelerate this process through guided forms covering each section of Annex IV, with AI-assisted generation section by section and a review editor to adjust every sentence before PDF export.
Timeline and regulatory context
The technical documentation obligation takes effect according to the AI Act timeline:
- 2 August 2026 — Deadline for high-risk AI systems under Annex III (biometrics, employment, education, essential services, etc.)
- 2 August 2027 — Deadline for systems embedded in regulated products (Annex I — medical devices, toys, vehicles)
The Digital Omnibus, proposed by the Commission in November 2025, provides for a maximum 16-month extension for Annex III systems (backstop date of 2 December 2027), conditional on the availability of harmonised standards. This text is under discussion in Parliament and Council, and its adoption is not guaranteed before August 2026.
In the absence of harmonised standards (CEN/CENELEC aim for publication by late 2026), providers must rely on the text of the regulation itself and on common specifications the Commission may adopt. This situation makes it all the more important to begin technical documentation without delay.
Penalties for non-compliance
A provider placing a high-risk AI system on the market without compliant technical documentation faces fines of up to €15 million or 3% of global annual turnover, whichever is higher (Article 99). For SMEs and startups, amounts are capped at the lower of the two thresholds, but remain significant.
Frequently asked questions
When does AI Act technical documentation become mandatory?
Technical documentation compliant with Annex IV is mandatory from the moment a high-risk AI system is placed on the market or put into service. For Annex III systems, the deadline is 2 August 2026. For those embedded in regulated products (Annex I), it is 2 August 2027. The Digital Omnibus proposes a maximum extension to 2 December 2027 for Annex III.
How many sections does Annex IV technical documentation contain?
Annex IV comprises 9 mandatory sections: general description, development and design, monitoring and control, performance metrics, risk management, lifecycle changes, applied standards, EU declaration of conformity, and post-market monitoring plan.
Do SMEs benefit from a lighter regime for technical documentation?
Yes. Article 11 provides that SMEs and startups may supply the elements of Annex IV in a simplified manner, using a form the Commission is to establish. Notified bodies are required to accept this form for conformity assessment. The required content remains the same, but the format and level of detail are adapted.
How long must technical documentation be retained?
Article 18 requires retention for 10 years from the date the high-risk AI system is placed on the market. National competent authorities may request access to this documentation at any time during that period.
What is the difference between Annex IV and Annex XI for technical documentation?
Annex IV applies to providers of high-risk AI systems (Article 11). Annex XI applies to providers of general-purpose AI models (GPAI) (Article 53). The content differs: Annex IV focuses on the complete system and its usage context, while Annex XI covers the underlying model, its training data and energy consumption.
What should you do if harmonised standards are not yet available?
In the absence of harmonised standards, Section 7 of Annex IV requires a detailed description of the solutions adopted to meet the requirements of Chapter III, Section 2. The provider must list alternative standards and technical specifications used (ISO 42001, ISO/IEC 23894, sector-specific standards) and explain how they cover the regulation's requirements.
Technical documentation is the central pillar of AI Act compliance for providers of high-risk systems. It translates the full set of regulatory requirements into a verifiable file. Organisations that integrate this documentation process from the earliest stages of development will gain a decisive advantage — not only in terms of compliance, but also in the quality and traceability of their AI systems.