19 Million Records Stolen the Day the EU Launched Its AI Age Verification: The Signal Nobody Read

Key takeaways
- Extraordinary timing: the ANTS breach (19 million records) and the EU AI age verification launch happened on the same day, April 15, 2026
- ANTS was central to France's age verification plan: the breached platform was meant to be a backbone of the social media age gate rolling out in September 2026
- AI age verification = high-risk under the AI Act: any AI system using biometrics to verify age falls under Annex III, point 1 of Regulation (EU) 2024/1689
- What the AI Act would have required: Articles 9 and 10 mandate documented security audits and data governance - an IDOR vulnerability would not have survived this process
- August 2026 deadline: companies building AI age verification solutions must produce their AI Act technical documentation now
On April 15, 2026, two events happened simultaneously in Europe. The European Commission announced that its age verification solution was technically ready for member states to deploy. At the same time, France's National Agency for Secure Documents suffered a cyberattack exposing the data of 18 to 19 million French citizens, through a vulnerability that any junior developer should have caught in hours. This is not a footnote. It is a demonstration of what happens when large-scale identity data systems operate without documented, auditable security governance - and a direct preview of what the AI Act is designed to prevent for every AI system that follows.
What happened: no ambiguity
ANTS (Agence Nationale des Titres Sécurisés, or France Titres) manages applications for passports, driving licenses, vehicle registrations and national identity cards. It is one of the most data-dense identity platforms in France. On April 15, a hacker operating as "breach3d" exploited an IDOR vulnerability in the API of moncompte.ants.gouv.fr.
An IDOR (Insecure Direct Object Reference) flaw means the system did not verify whether the person making a request had the right to access the requested resource. Changing a single identifier in the URL was enough to pull up a different citizen's profile. Names, dates of birth, email addresses, login identifiers: all accessible. IDOR falls under Broken Access Control, a fixture of the OWASP Top 10 for over a decade. The hacker himself called it "a really stupid flaw."
This type of vulnerability is taught in the first year of web development. Finding it on a government platform managing millions of identity documents raises one question: where was the documented security assessment? Where was the audit trail?
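The flaw pattern is easy to show in miniature. The sketch below is purely illustrative - the data and session model are invented, not ANTS's actual code - but it captures the missing check: the handler trusts a client-supplied identifier and never verifies ownership.

```python
# Illustrative IDOR sketch - the data and session model are invented,
# not ANTS's actual code.

PROFILES = {
    101: {"owner": "alice", "name": "Alice Martin", "dob": "1990-03-12"},
    102: {"owner": "bob", "name": "Bob Durand", "dob": "1985-07-01"},
}

def get_profile_vulnerable(session_user: str, profile_id: int) -> dict:
    """IDOR: trusts the client-supplied id, never checks ownership."""
    # Equivalent to changing the identifier in the URL: any
    # authenticated user can read any citizen's profile.
    return PROFILES[profile_id]

def get_profile_fixed(session_user: str, profile_id: int) -> dict:
    """Object-level authorization: the resource must belong to the
    authenticated user before it is returned."""
    profile = PROFILES.get(profile_id)
    if profile is None or profile["owner"] != session_user:
        raise PermissionError("403 Forbidden: not your resource")
    return profile
```

The entire fix is the ownership check: one conditional, of exactly the kind a documented security assessment exists to mandate.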
ANTS as the backbone of France's age verification system
What makes this incident particularly significant from an AI Act perspective is the context in which it occurred. France committed to deploying an age verification system for social media platforms by September 1, 2026, under legislation designed to protect minors under 15. The national identity infrastructure, of which ANTS is a central component, was intended to underpin this system.
On the same day as the breach, the European Commission announced that its own age verification solution was "technically ready to be implemented". This solution relies on cryptographic proof mechanisms allowing users to prove they are over a certain age without sharing additional personal data. It is designed to interoperate with the EU digital identity wallets expected by end 2026.
In other words: on the day France planned to rely on its national identity infrastructure to control access to social media, that same infrastructure demonstrated it could not withstand an elementary attack. The signal is hard to miss.
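The Commission's proof mechanism can be illustrated in miniature. The sketch below is a hedged stand-in: real deployments such as the EU wallet rely on more advanced cryptography (selective disclosure, zero-knowledge proofs), and every name and key here is invented. What it demonstrates is the principle: the verifier receives a signed "over 18" claim and nothing else - no birth date, no identity record.

```python
# Toy selective-disclosure age attestation. HMAC stands in for the real
# cryptography (a production issuer would use asymmetric signatures or
# zero-knowledge proofs); all names and keys here are invented.
import hashlib
import hmac
import json

ISSUER_KEY = b"demo-issuer-secret"  # illustrative shared secret only

def issue_age_token(user_id: str, over_18: bool) -> dict:
    """The identity issuer signs a minimal claim: just the age predicate."""
    claim = json.dumps({"sub": user_id, "over_18": over_18}, sort_keys=True)
    sig = hmac.new(ISSUER_KEY, claim.encode(), hashlib.sha256).hexdigest()
    return {"claim": claim, "sig": sig}

def verify_age_token(token: dict) -> bool:
    """The platform checks the signature and learns only the predicate."""
    expected = hmac.new(ISSUER_KEY, token["claim"].encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, token["sig"]):
        return False  # tampered or forged claim
    return json.loads(token["claim"])["over_18"]
```

A tampered claim fails the signature check, and a valid token discloses nothing beyond the boolean - which is the design property that lets age gating coexist with data minimization.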
What the AI Act says about age verification systems
An AI-powered age verification system, whether it uses biometric age estimation through facial recognition or cross-references identity data to validate a claim, is not an ordinary tool under Regulation (EU) 2024/1689.
Classification: high-risk with no exception available
Annex III, point 1 of the AI Act classifies all AI systems intended for biometric identification and categorization of natural persons as high-risk. Age verification through biometric analysis (facial scan, estimation from physical characteristics) falls directly into this category. No exception clause exists when biometric profiling is involved.
The resulting obligations are the most demanding in the regulation: technical documentation under Annex IV, documented risk management system (Article 9), training data governance (Article 10), formalized human oversight (Article 14), CE marking, registration in the EU database (Article 49).
The biometric red line to watch
Some age verification approaches cross an even more sensitive line. Article 5(1)(h) of the AI Act prohibits the use of real-time remote biometric identification systems in publicly accessible spaces for law enforcement purposes, with narrow, exhaustively listed exceptions. A system performing real-time facial recognition to determine a user's age on a mass consumer platform can edge toward this prohibited zone depending on who operates it and how it is used.
The distinction between "biometric age verification" and "prohibited remote biometric identification" is not always obvious. This is precisely the kind of grey area that AI Act technical documentation is designed to clarify - and that the absence of documentation leaves entirely on the deployer in the event of an inspection.
What the AI Act would have required at ANTS, and what it requires from AI solution developers
ANTS is not directly subject to the AI Act for its identity document management portal - the breached platform is an administrative service, not an AI system. But the incident illustrates exactly what the AI Act imposes on systems that do fall within its scope.
Article 9: documented risk management
Article 9 requires providers of high-risk AI systems to maintain an iterative process of risk identification, estimation and mitigation. This process explicitly includes security risks - and specifically the risk of unauthorized access to processed data. An IDOR vulnerability on an API exposing identity data is exactly the type of risk that this process should have identified and documented.
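What "identified and documented" looks like in practice can be as simple as a machine-readable risk register. The entry below is a hypothetical sketch - the schema and field names are invented, not an official AI Act format - showing how an unauthorized-access risk like IDOR would be recorded alongside its mitigations.

```python
# Hypothetical risk-register entry of the kind an Article 9 process
# produces. The schema and field names are invented for illustration;
# the AI Act prescribes the process, not a specific file format.
from dataclasses import dataclass, field

@dataclass
class RiskEntry:
    risk_id: str
    description: str
    severity: str                      # e.g. "critical" / "high" / "medium"
    likelihood: str
    mitigations: list[str] = field(default_factory=list)
    residual_risk: str = "unassessed"  # must be re-evaluated after mitigation

idor_risk = RiskEntry(
    risk_id="SEC-004",
    description="Unauthorized access to identity records via API "
                "(broken object-level authorization / IDOR)",
    severity="critical",
    likelihood="high",
    mitigations=[
        "Object-level authorization check on every resource endpoint",
        "Automated access-control tests in CI",
        "Periodic penetration test covering broken access control",
    ],
    residual_risk="low",
)
```

The value is not the data structure itself but the forcing function: a register with an empty mitigations list is a visible, auditable gap.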
Article 10: uncompromising data governance
Article 10 imposes strict governance requirements on data used by high-risk AI systems: origin traceability, quality controls, bias verification, and security measures appropriate to the sensitivity level. For a system processing biometric or identity data at scale, "appropriate security measures" cannot coexist with absent access controls on an API.
This is not a theoretical requirement. It is a documentary deliverable that the CNIL, designated as France's national AI Act supervisory authority since February 2026, can request to review at any point after August 2, 2026.
What AI age verification developers need to do now
Companies building AI age verification solutions - whether responding to public tenders or developing B2B products for platforms - have a narrow window before two converging deadlines: August 2, 2026 for the AI Act, then September 1, 2026 for the French age gate mandate.
- Classify the system precisely - determine whether the solution falls under Annex III (biometrics) or risks approaching the prohibited practices of Article 5. The boundary between age estimation and biometric identification must be explicitly documented.
- Produce the Annex IV technical documentation - all 9 mandatory sections, including architecture description, training data governance, risk management system and post-deployment monitoring plan.
- Document security measures under Article 9 - formally list identified risks (including unauthorized access risks) and corresponding mitigation measures. This is the section that, had it existed for ANTS, would have forced identification of the IDOR flaw.
- Anticipate the FRIA (Article 27) - if the solution is deployed by a public body, a fundamental rights impact assessment is mandatory. Age verification for minors on social media is exactly the type of system concerned.
- Verify Article 5 compatibility - confirm that the chosen verification method does not constitute real-time remote biometric identification in a publicly accessible space.
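The classification step in the checklist above can be sketched as a minimal decision helper. The rules paraphrase this article's reading of the AI Act in a few booleans; the function name and category strings are invented, and this is an illustration, not legal advice.

```python
# Minimal classification sketch paraphrasing this article's reading of
# the AI Act. Function name and category labels are invented; real
# classification requires legal analysis, not two booleans.

def classify_age_verification(uses_biometrics: bool,
                              realtime_remote_id_public_space: bool) -> str:
    if realtime_remote_id_public_space:
        # Article 5(1)(h) territory - the prohibition's exact scope
        # (law enforcement use, listed exceptions) needs legal review.
        return "potentially prohibited (Article 5) - legal review required"
    if uses_biometrics:
        # Biometric age verification: automatically high-risk.
        return "high-risk (Annex III, point 1)"
    return "outside Annex III, point 1 - assess other categories"
```

Encoding the decision this way has one virtue: the boundary between age estimation and biometric identification becomes an explicit, reviewable input rather than an unstated assumption.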
AI Act documentation is not an administrative formality. It is the process that forces the right questions to be asked before deployment - and keeps a record that they were. What the ANTS breach reveals is the absence of this process on a critical system.
Platforms like AiActo help development teams structure this documentation section by section, with AI-assisted generation and a human validation editor before export. For providers under time pressure, this is the difference between having compliant documentation on August 2 and not having it.
The real signal to retain
The ANTS incident of April 15, 2026 is not a lesson about public sector cybersecurity. It is a lesson about what happens when systems processing identity data at scale operate without documented, auditable governance.
The AI age verification systems being deployed in the coming months are subject to the AI Act. They process biometric or identity data on millions of users, including minors. Articles 9, 10 and Annex IV exist precisely to prevent a critical system from going live with a flaw that its own creator never formally assessed.
The April 15 signal was not subtle. The question is how many teams have read it.
Building or deploying an AI age verification system or a system processing biometric data? The free AiActo diagnostic identifies your risk level and AI Act obligations in under 3 minutes.
Is an AI age verification system covered by the EU AI Act?
Yes, without exception. Any AI system that uses biometrics to verify a user's age falls under Annex III, point 1 of Regulation (EU) 2024/1689. It is automatically classified as high-risk, with the heaviest obligations in the regulation: Annex IV technical documentation, Article 9 risk management, Article 10 data governance, CE marking. No exception clause applies when biometric profiling is involved.
What is the difference between biometric age verification and prohibited biometric identification under the AI Act?
Article 5(1)(h) of the AI Act prohibits real-time remote biometric identification in publicly accessible spaces for law enforcement purposes, subject to narrow, exhaustively listed exceptions. A system performing a real-time facial scan to estimate a user's age on a consumer platform may approach this prohibited category depending on its technical implementation and operating context. The distinction must be explicitly documented in the Annex IV technical documentation before any deployment takes place.
What does the ANTS breach reveal about AI Act requirements?
The IDOR vulnerability exploited in the ANTS breach would have had to be identified by a risk management process compliant with Article 9 of the AI Act. That article requires providers of high-risk AI systems to formally document security risks and corresponding mitigation measures. The incident illustrates exactly what this process is designed to prevent for biometric AI systems that fall within the AI Act's scope.
What is the compliance deadline for an AI age verification system?
August 2, 2026 is the main deadline for high-risk AI systems under Annex III, including biometric age verification systems. This nearly coincides with the planned entry into force of France's social media age gate (September 1, 2026). Providers must produce their Annex IV technical documentation and obtain CE marking before this date.
Is a Fundamental Rights Impact Assessment mandatory for an age verification system?
Yes, in most configurations. Article 27 of the AI Act requires deployers of high-risk biometric AI systems deployed by public bodies or in contexts affecting fundamental rights to conduct a FRIA. An age verification system processing the identity data of millions of users, including minors, falls squarely within this scope. The FRIA is separate from the GDPR DPIA and must be conducted in addition to it.