
The Grok Affair: What the Deepfake Scandal Reveals About the AI Act

31 March 2026 · 8 min read

Key takeaways

  • 3 million images in 11 days: between late December 2025 and early January 2026, Grok generated a massive volume of non-consensual sexualised deepfakes, including thousands involving minors
  • Already banned by the AI Act: "nudifier" systems fall directly under the unacceptable-risk practices of Article 5 - in force since February 2, 2025
  • The European Parliament responded: the IMCO/LIBE vote of March 18, 2026 explicitly added a ban on "nudifier" apps to the list of prohibited practices in the Digital Omnibus
  • European Commission mobilised: the EC ordered X to retain all internal documents related to Grok until end of 2026, and French and European prosecutors opened investigations
  • What this means for you: any business deploying a generative AI image tool without adequate safeguards faces the same regulatory risks - and Art. 50 transparency obligations apply from August 2026
  • The central lesson: the AI Act is not an abstract constraint. The Grok scandal is its real-world demonstration

January 2026. In under two weeks, Grok - the generative image AI developed by xAI, Elon Musk's company - produces around 3 million non-consensual sexualised images. Celebrities digitally undressed. Women politicians. Minors. The content spreads rapidly on X, formerly Twitter. The international response is immediate: investigations in France, India, the UK, Australia, Brazil. Malaysia and Indonesia block the platform. The European Commission orders X to preserve all internal documents until end of 2026.

For AiActo, this scandal is not a tech anecdote. It is a real-world demonstration of what the AI Act (Regulation EU 2024/1689) seeks to prevent - and why these rules exist.

What Grok Did - and What the AI Act Says About It

The feature at the origin of the scandal was simple: users submitted photos of real people to Grok and asked the AI to digitally "undress" them, place them in sexualised poses, or generate explicit content from their image. Grok executed these requests with a near-zero refusal rate - one study showed the AI rejected none of the 60 tested requests to generate deceptive electoral images, versus 72 to 97% refusals from competitors like ChatGPT, Claude or Gemini.

This type of system has a name: a "nudifier" app. And under the AI Act, its status is unambiguous.

Article 5: Prohibited practices since February 2025

Article 5 of the AI Act lists unacceptable-risk practices banned in the European Union. These prohibitions entered into force on February 2, 2025 - they already apply, regardless of the rest of the regulatory calendar.

Among them: systems that exploit human vulnerabilities to produce harmful behaviour, and systems that violate individual dignity. A system capable of generating sexualised content from a real person's image without their consent falls directly into this category.

The European Parliament went a step further in the IMCO/LIBE vote of March 18, 2026 on the Digital Omnibus: MEPs proposed explicitly adding a ban on "nudifier" applications to the list of prohibited practices. This is not a new constraint - it is a clarification of an existing prohibition, made necessary by the Grok affair.

Article 50: Transparency obligations on deepfakes

Beyond outright prohibition, Article 50§4 of the AI Act imposes an obligation applicable to all generative AI systems, even those not at unacceptable risk: any audio or visual content generated or manipulated by AI to resemble existing real people must be clearly identified as synthetic.

This obligation enters into force on August 2, 2026. It covers:

  • Deepfake videos depicting real people
  • AI-manipulated images of identifiable individuals
  • Audio content imitating an existing voice
  • Any visual synthesis presented as real

The Grok affair illustrates exactly why this obligation exists. Without clear labelling, this content circulates as authentic, causing reputational, psychological and in some cases physical harm to victims.
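In practice, deployers will need some machine-readable way to mark synthetic media. The sketch below is one illustrative approach, assuming a simple JSON disclosure record attached to each generated asset; the field names are hypothetical and do not come from the AI Act or any official schema.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class SyntheticContentLabel:
    """Illustrative disclosure record for AI-generated media.
    Field names are hypothetical, not an official Art. 50 schema."""
    is_ai_generated: bool
    generator: str             # the model or tool that produced the asset
    depicts_real_person: bool  # the case that triggers the deepfake labelling duty
    created_at: str            # ISO 8601 timestamp

def build_label(generator: str, depicts_real_person: bool) -> dict:
    label = SyntheticContentLabel(
        is_ai_generated=True,
        generator=generator,
        depicts_real_person=depicts_real_person,
        created_at=datetime.now(timezone.utc).isoformat(),
    )
    return asdict(label)

# A deployer would store this alongside the asset: in image metadata,
# a sidecar file, or the API response that delivers the content.
record = build_label("image-model-x", depicts_real_person=True)
print(json.dumps(record, indent=2))
```

Standards such as C2PA content credentials aim to carry exactly this kind of provenance information inside the media file itself, which is more robust than a detachable sidecar.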

The Fundamental Difference Between Grok and Its Competitors

What distinguishes the Grok affair from an isolated incident is the deliberate positioning of the platform. Where OpenAI, Anthropic and Google developed strict safeguards - refusal of manipulative requests, filters on images of real people, explicit usage policies - Grok adopted a "less censorship" philosophy from the outset.

Its usage policy ran to fewer than 350 words and shifted responsibility onto the user. The concrete result: content that competitors refused in 72 to 97% of cases, Grok produced without restriction.

This is not a technical flaw - it is a design choice. And this is exactly what the AI Act seeks to prevent by imposing risk management obligations from the design phase onward (Article 9).

This distinction matters for businesses: the AI Act does not only sanction abuses after the fact. It requires providers to design their systems to prevent abuse - the "safety by design" principle embedded in Article 9 on risk management.
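What "safety by design" means concretely is a control that runs before generation, not after publication. The sketch below is a deliberately minimal pre-generation policy gate, assuming a keyword screen for illustration only; real providers use trained classifiers and multi-stage review, not word lists.

```python
# Minimal sketch of a pre-generation policy gate (an Article 9-style
# risk control). A production system would use trained content
# classifiers; the keyword list here is purely illustrative.
PROHIBITED_INTENTS = {"undress", "nudify", "remove clothes", "sexualised"}

def screen_request(prompt: str, references_real_person: bool) -> tuple[bool, str]:
    """Return (allowed, reason) for an image-generation request."""
    lowered = prompt.lower()
    if references_real_person and any(term in lowered for term in PROHIBITED_INTENTS):
        # Refuse before the request ever reaches the image model.
        return False, "refused: non-consensual sexualised imagery of a real person"
    return True, "allowed"

allowed, reason = screen_request("undress this photo of her", references_real_person=True)
print(allowed, reason)
```

The point of the design is placement: the check sits upstream of the model, so a prohibited request is never executed - which is the behaviour the 72 to 97% refusal rates of Grok's competitors reflect.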

The Regulatory Response: Fast, International, Coordinated

  • European Commission (January 8): ordered X to preserve all internal documents related to Grok until end of 2026
  • Paris prosecutors (early January): investigation opened for "manifestly illegal content", followed by a raid on X's Paris offices in February with Europol
  • Ofcom UK (January 12): investigation into X's compliance with user protection rules
  • Malaysia and Indonesia (January 10): first countries to temporarily block access to Grok
  • Australia, Brazil, India: investigations and formal notices in the weeks that followed
  • European Parliament (March 18): IMCO/LIBE vote integrates an explicit ban on "nudifier apps" into the Digital Omnibus

The speed of these responses shows that the regulatory framework already existed - the AI Act, the DSA, national laws. What was missing was systematic enforcement. This is precisely what the strengthening of the AI Office and national authorities aims to correct.

What the Grok Affair Changes for Your Business

If you're not xAI, you might think this scandal doesn't concern you. That would be a mistake for three reasons:

1. You may be using similar tools

Hundreds of AI image generation or manipulation applications circulate in business environments - for creating marketing visuals, retouching product photos, generating avatars or characters. If any of these tools can be repurposed to generate sexualised images of real people without consent, your company - as a deployer - shares regulatory responsibility.

2. Your Art. 50 obligations on deepfakes arrive in August 2026

Even if your use is entirely legitimate, Article 50§4 requires that any AI content manipulating the image of real people be clearly identified. This covers your marketing campaigns with avatars, synthesis videos, and content generated with tools like Midjourney or DALL-E.

3. Reputation precedes sanction

The Grok affair made this clear: before any fines were issued, it was the international response - country blocks, judicial investigations, commercial losses - that caused the most damage to xAI. For an SME or SaaS publisher, a similar incident at a smaller scale can be fatal, well before the national authority intervenes.

Verifying that your generative AI tools have adequate safeguards - refusal of manipulative requests, no non-consensual explicit content generation - has become basic due diligence. The AiActo diagnostic helps you identify your obligations based on your deployer profile.

Frequently Asked Questions

Does the AI Act apply to Grok and xAI, a US company?

Yes. The AI Act applies to any AI system whose outputs are used in the European Union, regardless of where the provider is established. xAI distributing Grok in Europe is subject to the AI Act. This is why the European Commission and European regulators were able to intervene directly from January 2026.

Were Grok's practices already illegal before the AI Act?

Partly. Non-consensual sexualised content and images of minors fall under pre-existing prohibitions - national criminal law, the DSA, the GDPR. The AI Act adds a specific layer: it prohibits the very design of systems enabling these practices, not just their abusive use. This is a paradigm shift: from content control to design governance.

What does the Digital Omnibus concretely change on this issue?

The IMCO/LIBE vote of March 18, 2026 proposes explicitly adding a ban on "nudifier" systems to Article 5's list of prohibited practices. This is not a new rule - it is a clarification that makes the prohibition indisputable and facilitates prosecutions. The text still needs to be adopted in plenary and after trilogues, but the political direction is clear.

My company uses Midjourney or DALL-E to create visuals - am I concerned?

If you use these tools for legitimate purposes (brand visuals, illustrations, marketing content), you're not in the same situation as Grok. But two obligations still apply: verify that the tools you use have adequate safeguards (deployer responsibility Art. 26), and clearly label AI-generated content depicting real people from August 2026 (Art. 50§4).

What penalties does the AI Act provide for this type of violation?

Violations of Article 5 (prohibited practices) are the most severely sanctioned in the entire regulation: up to €35 million or 7% of global annual turnover, whichever is higher. For a company of xAI's size, these figures represent hundreds of millions of euros of potential exposure.
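The "whichever is higher" rule is a simple max, but it is worth seeing how quickly the turnover branch dominates. A minimal sketch of the Article 5 ceiling:

```python
def max_article5_fine(global_annual_turnover_eur: float) -> float:
    """Article 5 ceiling: the higher of EUR 35 million
    or 7% of worldwide annual turnover."""
    return max(35_000_000.0, 0.07 * global_annual_turnover_eur)

# Below EUR 500 million turnover, the flat EUR 35 million floor applies;
# above it, the 7% branch takes over.
print(f"{max_article5_fine(2_000_000_000):,.0f}")  # → 140,000,000
```

For any company with more than EUR 500 million in turnover, exposure scales with revenue rather than stopping at the flat floor, which is what makes these figures material even for the largest providers.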

The Grok affair is not an edge case - it is a real-time demonstration of what the AI Act seeks to prevent. It confirms that the practices prohibited by Article 5 are not theoretical hypotheses: they happen, they cause real harm, and regulators are capable of responding fast. Preparing before August 2026 is not a precautionary option - it is a necessity. Check the AI Act timeline to follow all key dates and anticipate your obligations.
