
Your AI Images and Videos Must Be Identified by November 2026: What It Means

20 April 2026 - 7 min read

Key takeaways

  • November 2, 2026 - the target date: the European Parliament set this date for the machine-readable labelling obligation on AI content, under the Digital Omnibus currently in trilogue (the Council proposes February 2, 2027)
  • Article 50 AI Act - staggered application: the obligation to inform users they are interacting with an AI (chatbots) applies from August 2, 2026. Machine-readable labelling of generative content follows in November.
  • Who is affected: any business that generates and publicly distributes images, videos, audio or text produced by AI - marketing campaigns, editorial content, avatars, synthetic voices
  • Two levels of obligation: visible labelling for deepfakes (identifiable real people), machine-readable labelling for all generative content distributed at scale
  • Not optional: Article 50 violations are subject to fines up to €7.5M or 1% of global turnover - applies to both providers and deployers
  • Tools are adapting: OpenAI, Google and Mistral are deploying watermarking in their APIs - but it's up to you to verify it's active and preserved through your publishing pipeline

Do you use Midjourney for visuals, ChatGPT for content, or an AI voice synthesis tool for your podcasts? You'll need to say so clearly - not just in your privacy policy. The AI Act (Regulation EU 2024/1689) imposes a content labelling obligation arriving on a precise date, set by the European Parliament under the Digital Omnibus currently being finalised: November 2, 2026.

This timeline is firm in substance. Unlike the Annex III high-risk obligations being delayed to December 2027, content labelling is kept on an accelerated schedule by both European institutions - the only point still open in trilogue is whether the machine-labelling date lands in November 2026 or February 2027. In less than seven months, your content production workflow needs to integrate these requirements.

What Article 50 AI Act Actually Says

Article 50 covers four distinct transparency obligations with different application dates and scopes:

Obligation 1 - Chatbots and conversational AI (Art. 50§1)

Applicable from August 2, 2026. Anyone interacting with an AI system - customer service chatbot, virtual assistant, automated agent - must be informed they are talking to a machine. Exception: when context makes the AI nature obvious.

Obligation 2 - Emotion recognition and biometric systems (Art. 50§3)

Systems analysing emotions or categorising individuals by biometric characteristics must inform the people concerned. Applicable from August 2, 2026.

Obligation 3 - Deepfakes and manipulated content (Art. 50§4)

The broadest obligation for content-producing businesses. Any audio or visual content generated or manipulated by AI to resemble real people, places or events must be clearly labelled as artificial or manipulated. The label must be visible and understandable to the recipient.

Concrete cases covered:

  • AI avatar videos resembling real people in your campaigns
  • AI-generated "customer testimonial" photos
  • Synthetic voices imitating a known presenter or personality
  • AI presenter videos delivering product news or information
  • AI-generated or heavily retouched images of real places presented as authentic

Obligation 4 - Machine-readable labelling (Art. 50§2) - the big November 2026 addition

The most technically demanding obligation. Providers of generative AI systems (images, audio, video, text) must ensure their outputs are marked in a machine-readable format - commonly called "watermarking". This invisible, file-level marking allows platforms, regulators and detection algorithms to automatically verify the AI origin of a piece of content.

The key distinction: Obligation 3 (Art. 50§4) targets deployers - you must display the label. Obligation 4 (Art. 50§2) targets model providers - it's OpenAI, Midjourney, ElevenLabs that must embed the marking in their outputs. But if you use these tools and the marking is absent or stripped, you remain responsible as a deployer.
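As a deployer, a quick sanity check is to look for provenance data in the raw bytes of the files you publish. The sketch below is a crude heuristic only, assuming the C2PA manifest's ASCII labels ("jumb" for the JUMBF container, "c2pa" for the manifest) are present somewhere in the file - it does not validate signatures, so a real check should go through a proper C2PA validator such as the open-source c2patool CLI:

```python
"""Crude heuristic check for C2PA provenance data in a media file.

Assumption: C2PA manifests are embedded as JUMBF boxes whose bytes
contain the ASCII labels "jumb" and "c2pa". Scanning for those labels
is a quick first filter, NOT a substitute for a real C2PA validator,
which also verifies the cryptographic signatures.
"""

from pathlib import Path

C2PA_LABELS = (b"jumb", b"c2pa")


def looks_c2pa_marked(path: str) -> bool:
    """Return True if the file's raw bytes contain the C2PA/JUMBF labels."""
    data = Path(path).read_bytes()
    return all(label in data for label in C2PA_LABELS)
```

A False result on an asset you generated with a watermark-enabled tool is a strong hint that your pipeline stripped the marking somewhere between generation and publication.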

What the Digital Omnibus Changes on the Timeline

  • Chatbots Art. 50§1: maintained at August 2, 2026 - no change proposed
  • Deepfakes Art. 50§4: maintained at August 2, 2026 for new obligations
  • Machine labelling Art. 50§2: Parliament proposes November 2, 2026; Council proposes February 2, 2027. Final date to be set in trilogue - either way, prepare now.

Where Providers Stand on Watermarking

  • OpenAI: progressively deploying C2PA marking in DALL-E generated images. Metadata embedded in the image file includes AI origin and an authenticity hash.
  • Google: integrates SynthID in Imagen and Lyria (audio) models. The watermark is invisible and resistant to common modifications (resizing, compression).
  • Adobe: pioneer of the C2PA standard, integrated in Firefly and Photoshop.
  • Mistral: progressive deployment on generative image models.

Important: marking is often present in file metadata - but can be stripped during format conversion, aggressive compression, or upload to social platforms that "clean" metadata.
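To see how easily that happens, here is a simplified simulation. It assumes a JPEG whose C2PA manifest sits in APP11 segments (where C2PA stores its JUMBF data in JPEG files): a pipeline that rewrites the segment list and drops APP11 silently discards the provenance. Real platforms often go further and re-encode the whole image.

```python
"""Simulate a metadata-cleaning pipeline dropping a JPEG's C2PA data.

Assumption: the C2PA manifest lives in APP11 (0xFFEB) segments. This
walker copies every segment except APP11, which is what a metadata
"cleaner" effectively does to your watermark.
"""


def strip_app11(jpeg: bytes) -> bytes:
    """Rewrite a JPEG's segment list without its APP11 segments."""
    out = bytearray(jpeg[:2])              # keep SOI (0xFFD8)
    i = 2
    while i < len(jpeg) and jpeg[i] == 0xFF:
        marker = jpeg[i + 1]
        if marker == 0xDA:                 # SOS: entropy-coded data follows
            out += jpeg[i:]                # copy the rest unchanged
            break
        length = int.from_bytes(jpeg[i + 2:i + 4], "big")
        if marker != 0xEB:                 # keep everything except APP11
            out += jpeg[i:i + 2 + length]
        i += 2 + length
    return bytes(out)
```

The practical consequence: verify watermark presence on the file as it is actually served to the public, not on the file as it leaves your generation tool.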

What You Need to Do Before November 2026

  1. Inventory your AI content generation tools: list all tools producing images, videos, audio or text - Midjourney, DALL-E, ElevenLabs, HeyGen, Synthesia... For each, check whether they integrate C2PA or equivalent marking in their exports.
  2. Audit your publishing workflow: identify steps where marking metadata could be stripped - format conversion, social media upload, web compression.
  3. Add visible labels on your deepfakes: for any AI-generated image or video of a real person, add a visible mention - "AI Generated", "Synthetic Image", "AI Voice". This is the Art. 50§4 obligation, applicable from August 2026.
  4. Update your T&Cs and legal notices: clearly state that your site or application uses generative AI to produce content, and specify the markings applied.
  5. Train your content and marketing teams: teams publishing content need to know what is subject to the labelling obligation and how to verify watermark presence before publishing.
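The audit in steps 1 and 2 can be partly automated. The sketch below is a minimal example, assuming your AI-generated assets sit under a single directory: it flags files whose raw bytes show no trace of the C2PA/JUMBF labels. Again, this is only a first filter to catch obvious stripping - back it with a real C2PA validator before publication.

```python
"""Pre-publish audit sketch: flag AI-generated assets with no byte-level
trace of C2PA provenance data.

Assumptions: assets live under one directory, and scanning for the
JUMBF/C2PA ASCII labels is an acceptable first filter.
"""

from pathlib import Path

MEDIA_EXTS = (".jpg", ".jpeg", ".png", ".mp4")


def unmarked_assets(asset_dir: str, exts=MEDIA_EXTS) -> list[str]:
    """Return paths of media files whose bytes show no C2PA/JUMBF labels."""
    flagged = []
    for path in Path(asset_dir).rglob("*"):
        if path.suffix.lower() in exts:
            data = path.read_bytes()
            if not (b"jumb" in data and b"c2pa" in data):
                flagged.append(str(path))
    return sorted(flagged)
```

Run this against your published-asset folder before each campaign goes live, and treat every flagged file as needing either re-export from a watermark-enabled tool or a visible label.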

The AiActo diagnostic helps you identify whether your AI uses fall within Article 50's scope and prepare your compliance documentation ahead of the deadlines.

The AI content labelling obligation is not an abstract constraint - it's the regulatory response to documented abuses: electoral deepfakes, fake celebrity endorsements, synthetic videos presented as real. For businesses using generative AI legitimately, compliance is straightforward and low-cost. Check the AI Act timeline to plan your steps before November 2026.

Frequently Asked Questions on AI Content Labelling

When does the AI content labelling obligation come into force in Europe?

The visible labelling obligation for deepfakes and manipulated content (Article 50§4 of the AI Act) applies from August 2, 2026. The machine-readable marking obligation (Article 50§2 - watermarking) enters into force on November 2, 2026 according to the European Parliament's position under the Digital Omnibus, or February 2, 2027 according to the Council's position. The final date will be set by the trilogue agreement expected in late April 2026.

What types of AI content must be labelled under the AI Act?

Article 50 of the AI Act requires identification of several categories: deepfakes and audio/visual content generated or manipulated by AI depicting real people, places or events (Art. 50§4); outputs of large-scale generative AI systems with machine-readable marking (Art. 50§2); and chatbot interactions where users must be informed they are talking to a machine (Art. 50§1). Artistic works clearly presented as fictional benefit from an exception.

My company uses AI-generated images for marketing: am I affected?

Yes, if those images depict real people or are presented as authentic photographs. Article 50§4 requires clear labelling for any AI content depicting identifiable real individuals - including AI avatar customer testimonials, AI-generated "satisfied customer" photos, or synthetic presenter videos. If your visuals are clearly presented as illustrations or artistic creations, the obligation is lighter. The AiActo diagnostic can identify your precise obligations.

What is the C2PA standard and is it sufficient to comply with the AI Act?

The C2PA (Coalition for Content Provenance and Authenticity) is an open technical standard that embeds cryptographic metadata in media files to certify their origin. Backed by Adobe, Microsoft, Google and Sony, it is currently the most advanced standard for meeting the Article 50§2 machine-readable marking obligation. The AI Act does not mandate a specific technical standard, but C2PA is the reference being deployed by AI tool providers. A file with intact C2PA metadata satisfies the machine marking obligation - provided that metadata is not stripped during processing or publication.

What penalties apply if my business doesn't label its AI content?

Violations of Article 50 transparency obligations are subject to fines of up to €7.5 million or 1% of global annual turnover, whichever is higher. These penalties apply to both AI tool providers (who failed to embed marking) and deployers (who distributed content without the required identification). In France, ARCOM is the competent authority for digital content, coordinating with the CNIL and DGCCRF depending on the case.

Does AI content labelling apply to texts generated by ChatGPT?

The machine-readable marking obligation (Art. 50§2) applies to generative text distributed at scale by AI systems. For standard professional use - drafting an email or article that is reviewed and edited by a human - visible labelling is not automatically required. However, if text is generated and published directly without genuine human oversight, or represents a fake testimonial or false quote attributed to a real person, the transparency obligation applies.
