
Article 50 AI Act: Everything You Need to Know About Transparency Obligations for AI-Generated Content

9 March 2026 · 13 min read

Key takeaways

Article 50 of the AI Act applies to ALL generative AI systems — not just high-risk ones. Chatbots, deepfakes, images, text: here are the 4 transparency obligations, the timeline and the concrete measures to implement before August 2026.

In the AI Act, high-risk systems and prohibited practices capture most of the attention. Yet it is Article 50 that will affect the greatest number of organisations. Its scope is far broader: it applies to all generative AI systems, from chatbots to image generators, from speech synthesis tools to writing assistants. Any business that uses or provides a system capable of producing synthetic text, images, audio or video is affected — and the deadline is 2 August 2026.

What Article 50 says

Article 50 of Regulation (EU) 2024/1689 establishes transparency obligations for four distinct situations involving AI systems. Unlike obligations for high-risk systems (Chapter III), these obligations do not depend on risk classification: they apply whenever an AI system falls into one of the following four cases.

Obligation 1 — Disclose AI interactions (Art. 50(1))

Providers must design their AI systems so that persons are informed they are interacting with an AI system rather than a human being. This obligation targets chatbots, virtual assistants and any AI system in direct contact with natural persons.

Exception: where the context of use makes it obvious that the system is an AI. In practice, however, this exception is interpreted narrowly.

Obligation 2 — Mark synthetic content (Art. 50(2))

Providers of generative AI systems — including GPAI models — must ensure that generated content (audio, image, video, text) is:

  • Marked in a machine-readable format — via watermarking, metadata or other provenance techniques
  • Detectable as artificially generated or manipulated — technical solutions must be effective, interoperable, robust and reliable

Marking solutions must withstand common alterations: cropping, compression, changes in resolution or format. Providers must also make available free detection tools with confidence scores.
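To make "machine-readable marking" and "detection" more concrete, here is a minimal Python sketch of a detached provenance record signed with HMAC, so a verification tool can confirm both the origin claim and the content's integrity. The function names and signing scheme are illustrative, not taken from the AI Act or the draft Code; note also that a detached record like this does not survive cropping or re-encoding, which is precisely why the Code points to robust techniques such as invisible watermarking and C2PA-style embedded manifests.

```python
import hashlib
import hmac
import json

SECRET_KEY = b"provider-signing-key"  # hypothetical key held by the provider


def mark_content(content: bytes) -> dict:
    """Produce a machine-readable provenance record for generated content.

    Minimal sketch only: real deployments would embed a standardised
    manifest (e.g. C2PA) or an invisible watermark in the file itself.
    """
    record = {
        "ai_generated": True,
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return record


def detect(content: bytes, record: dict) -> bool:
    """Verify the record was signed by the provider and matches the content."""
    unsigned = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return (
        hmac.compare_digest(expected, record.get("signature", ""))
        and unsigned.get("content_sha256") == hashlib.sha256(content).hexdigest()
    )
```

A free detection tool, as envisaged by Article 50(2), would expose something like `detect()` to the public, typically with a confidence score rather than a binary answer.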

Exceptions: AI systems performing an assistive function for standard editing, or that do not substantially alter the data provided by the deployer. Systems authorised by law for detecting or prosecuting criminal offences are also exempt.

Obligation 3 — Inform about emotion recognition (Art. 50(3))

Deployers of emotion recognition or biometric categorisation systems must inform the natural persons exposed about the operation of the system. This information must be provided before exposure.

Obligation 4 — Label deepfakes and AI text (Art. 50(4))

This is the most scrutinised obligation. Deployers of AI systems that generate or manipulate content constituting a deepfake must disclose that the content has been artificially generated or manipulated. This obligation also applies to AI-generated text published to inform the public on matters of public interest.

A deepfake, as defined by the AI Act (Art. 3(60)), is image, audio or video content generated or manipulated by AI that resembles real persons, objects, places or events and would be likely to mislead a person into believing it is authentic.

Exceptions:

  • Content forming part of an evidently artistic, creative, satirical or fictional work — minimal and non-intrusive disclosure suffices
  • Content authorised by law for detecting or prosecuting criminal offences
  • AI text that has undergone human review and for which a natural or legal person has assumed editorial responsibility

Who is affected: providers vs deployers

Article 50 allocates responsibilities between two actors in the value chain, with complementary obligations.

Providers: upstream technical marking

Providers (who develop and place AI systems on the market) are responsible for technical marking: imperceptible watermarking, provenance metadata, integrated detection mechanisms. They must also provide tools enabling deployers to fulfil their own obligations.

The first draft Code of Practice specifies that upstream providers (foundation models) must integrate marking 'by design' before market placement, so that downstream system providers can meet their obligations.

Deployers: downstream visible labelling

Deployers (who use AI systems under their own responsibility) must ensure visible labelling for end users. It is the deployer who must display the common icon, insert disclaimers and ensure persons are informed of the artificial origin of content.

Personal, non-professional use of an AI system does not make the user a deployer. However, publishing content to a broad audience may qualify as deployment, even for an individual.

The Code of Practice on AI content transparency

To support the implementation of Article 50, the AI Office launched the development of a Code of Practice on the marking and labelling of AI-generated content in late 2025. Although voluntary, this code is set to become the compliance benchmark — regulators and courts will use it to assess business practices.

Two specialised working groups

The code is being drafted by independent experts, with input from stakeholders across the AI value chain, organised into two working groups:

  • WG1 — Marking and detection techniques: watermarking, metadata, provenance solutions, interoperability, robustness against removal, technical governance
  • WG2 — Disclosure of deepfakes and AI text: visible labelling, content taxonomy, distinction between 'fully AI-generated' and 'AI-assisted' content, platform responsibilities

During the second round of meetings in January 2026, discussions focused on the interplay between technical marking and visible labelling, the need to avoid information fatigue for users, and coherence with other EU legislation (notably the Digital Services Act).

The common "AI" icon

The draft Code proposes a harmonised common icon at EU level to identify AI-generated content. Pending finalisation of an EU-wide interactive symbol, the interim solution is a visual label containing the acronym "AI" (or its local equivalent: "IA" in French, "KI" in German).

The "fully AI-generated" vs "AI-assisted" taxonomy

The Code introduces a distinction between two categories of content:

  • Fully AI-generated — Content created autonomously by the system, without human-authored authentic content
  • AI-assisted — Content involving significant human contribution, with AI in an assistive role

This distinction has important legal and commercial implications. Content labelled "fully AI-generated" could be considered as lacking sufficient human creative contribution for European copyright protection — potentially making it free to reuse by third parties.

Code of Practice timeline

  • December 2025 — First draft published
  • March 2026 — Second draft published (incorporating stakeholder feedback)
  • June 2026 — Final version expected
  • August 2026 — Article 50 obligations come into force

Concrete measures by content type

The draft Code details modality-specific disclosure measures.

Real-time deepfake video

The AI icon must be displayed in a non-intrusive but permanent manner throughout the exposure. A disclaimer must be inserted at the beginning of the broadcast.

Recorded deepfake video

A disclaimer at the beginning of the video or the AI icon displayed permanently in a fixed position. For artistic or fictional works, the icon must appear for at least 5 seconds.

Deepfake images

The common icon must be placed consistently at every exposure, in a fixed position within the image.

Deepfake audio

A short audible disclaimer in plain language must be inserted at the beginning. For content longer than 30 seconds, the disclaimer must be repeated. For artistic works, a non-intrusive disclaimer at the beginning suffices.
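The audio rule is the most mechanical of the four, so it lends itself to a small sketch: one disclaimer at the start, repeated for longer content. The draft Code requires repetition beyond 30 seconds but does not fix an exact cadence, so the 30-second repeat interval below is an assumption, not a requirement.

```python
def disclaimer_offsets(duration_s: float, repeat_interval_s: float = 30.0) -> list[float]:
    """Return the offsets (in seconds) at which an audible AI disclaimer
    should be inserted in a synthetic audio track.

    Rule sketched here: always one disclaimer at 0s; for content longer
    than the interval, repeat it. The exact repeat cadence is an
    assumption pending the final Code of Practice.
    """
    offsets = [0.0]
    t = repeat_interval_s
    while t < duration_s:
        offsets.append(t)
        t += repeat_interval_s
    return offsets
```

For a 25-second clip this yields a single disclaimer at the start; a 75-second clip gets disclaimers at 0, 30 and 60 seconds.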

AI text on matters of public interest

The icon must be displayed in a fixed, clear and distinguishable position — at the top of the text, beside the text, in the colophon or after the closing sentence.
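For text, visible labelling can be as simple as attaching a disclosure line at one of the permitted positions. The sketch below shows the two simplest placements (top of the text, after the closing sentence); the label wording and function name are illustrative, since the final icon and phrasing are still being defined in the Code of Practice.

```python
AI_LABEL = "[AI] This text was generated by artificial intelligence."  # wording illustrative


def label_ai_text(body: str, position: str = "top") -> str:
    """Attach a visible AI disclosure to a published text.

    Supports two of the placements mentioned in the draft Code:
    at the top of the text, or after the closing sentence.
    """
    if position == "top":
        return f"{AI_LABEL}\n\n{body}"
    if position == "end":
        return f"{body}\n\n{AI_LABEL}"
    raise ValueError(f"unsupported position: {position!r}")
```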

Interaction with other EU legislation

Article 50 does not function in isolation. It interacts with several European texts:

  • Digital Services Act (DSA) — Platforms that are both deployers and online intermediaries must combine Article 50 obligations with DSA requirements (content moderation, transparency, risk assessments)
  • GDPR — Emotion recognition and biometric categorisation systems (Art. 50(3)) raise data protection questions
  • GPAI obligations — For GPAI models (Art. 51-56), Article 50 transparency obligations are cumulative with model-specific provider obligations
  • High-risk systems — AI systems that are both generative and high-risk must comply with both Article 50 transparency obligations and Chapter III requirements

What businesses should do now

With the August 2026 deadline, organisations have less than 6 months to prepare. Here are the priority steps.

1. Map affected AI systems

Identify all AI systems in your organisation that generate or manipulate content (text, image, audio, video) or interact directly with persons. Include third-party tools used by your teams (writing assistants, image generators, chatbots).
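A mapping exercise like this is easier to keep consistent if each system is captured as a structured record that derives its likely obligations from a few facts. The sketch below is one possible shape for such an inventory row; the field names and the obligation-matching logic are our own simplification of Article 50, not an official checklist.

```python
from dataclasses import dataclass


@dataclass
class AISystemRecord:
    """One row of an Article 50 mapping exercise (fields are illustrative)."""
    name: str
    modalities: list[str]          # e.g. ["text", "image"]
    role: str                      # "provider", "deployer", or "provider+deployer"
    interacts_with_persons: bool   # may trigger Art. 50(1)
    generates_content: bool        # may trigger Art. 50(2) / 50(4)

    def applicable_obligations(self) -> list[str]:
        obligations = []
        if self.interacts_with_persons:
            obligations.append("Art. 50(1): disclose AI interaction")
        if self.generates_content and "provider" in self.role:
            obligations.append("Art. 50(2): machine-readable marking")
        if self.generates_content and "deployer" in self.role:
            obligations.append("Art. 50(4): visible labelling (deepfakes / public-interest text)")
        return obligations
```

Running every tool in the organisation through a record like this makes gaps visible early, such as a third-party image generator used by marketing with no labelling process downstream.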

2. Determine your role in the value chain

Are you a provider (you develop the AI system) or a deployer (you use an existing AI system)? Obligations differ. Many companies are both for different systems.

3. Assess existing marking capabilities

For providers: do your systems integrate watermarking, provenance metadata and detection solutions? Are they robust against common alterations? Are they interoperable with emerging standards?

4. Review publication workflows

For deployers: how is AI-generated content published in your organisation? Are labelling processes in place? Are marketing, communications and editorial teams trained?

5. Anticipate copyright implications

The "fully AI-generated" vs "AI-assisted" classification may affect your content's copyright protection. Assess the implications for your intellectual property.

Do you use generative AI systems and want to assess your transparency obligations? The free AiActo diagnostic identifies your role (provider or deployer) and your specific obligations under the AI Act.

Frequently asked questions

Does Article 50 apply to AI systems that are not high-risk?

Yes. This is precisely what makes Article 50 so important. Transparency obligations apply to all generative AI systems, regardless of risk classification. A simple chatbot or image generator is affected just as much as a high-risk AI system.

When do Article 50 obligations come into force?

2 August 2026. This is the same deadline as for Annex III high-risk AI systems. The Digital Omnibus could modify certain aspects of the timeline, but Article 50 is not directly affected by the proposed delays.

Is the Transparency Code of Practice mandatory?

No, the Code of Practice is voluntary. However, it will very likely be used as a compliance benchmark by regulators and courts to assess whether an organisation meets its Article 50 obligations. Not aligning with it may attract closer scrutiny from authorities.

Do I need to label text written with AI assistance?

The AI text labelling obligation (Art. 50(4)) applies to texts published to inform the public on matters of public interest. Texts that have undergone human review and for which a person has assumed editorial responsibility are exempt. An internal email or working draft is not affected.

How do I mark an AI-generated image?

Two levels of marking are needed. The provider integrates imperceptible technical marking (watermarking, C2PA/IPTC metadata). The deployer applies visible labelling — the common "AI" icon in a fixed position. Both levels are complementary.

Are artistic or satirical works exempt?

Partially. Deepfakes forming part of an evidently artistic, creative, satirical or fictional work benefit from a lighter regime: minimal and non-intrusive disclosure suffices (e.g. icon displayed for 5 seconds for video). But total exemption does not exist — a minimum level of transparency remains mandatory.

Article 50 is the AI Act obligation that will affect the most organisations, well beyond providers of high-risk systems. With the entry into force in August 2026 and a Code of Practice taking shape, businesses that start preparing marking and labelling of their AI content now will gain a head start — both on compliance and user trust.
