
ChatGPT at Work: Does the AI Act Apply to Your Business?

26 March 2026 · 8 min read

Key takeaways

  • You are a deployer: any business that uses a generative AI tool in a professional context is a "deployer" under the AI Act, with specific obligations from August 2026.
  • Article 50 - the universal rule: if you deploy an AI chatbot to your customers or employees, you must inform them they are interacting with an AI. No exception for company size.
  • Deepfakes concern you: any AI-generated video, image or audio depicting a real person must be clearly labelled as synthetic content.
  • ChatGPT itself is a GPAI model: OpenAI has been subject to Articles 51-56 obligations since August 2025. But your responsibility as a professional user exists independently.
  • High risk is possible: if you use a generative AI tool for HR decisions, scoring or access to essential services, you may fall under high risk (Annex III).
  • No SME exemption for Article 50: transparency obligations apply to companies of all sizes, from the first chatbot deployed.

ChatGPT, Copilot, Gemini, Claude, Mistral - these tools have become part of the daily routine of millions of European employees. Drafting emails, summarising documents, generating marketing content, automated customer support... Generative AI has slipped into business workflows faster than regulation can follow. But the AI Act (Regulation (EU) 2024/1689) is here, and it very likely concerns you more directly than you think.

The most common misconception: "I didn't build ChatGPT, so I'm not a provider, so the AI Act doesn't really affect me." That's wrong. The regulation distinguishes two main roles, and one of them - the deployer - applies precisely to businesses that use these tools.

Provider vs Deployer: Where Do You Stand?

Article 3 of the AI Act defines two distinct categories:

  • The provider: the entity that develops and commercialises the AI system. OpenAI for ChatGPT, Microsoft for Copilot, Google for Gemini. They bear the heaviest obligations - technical documentation, CE marking, EU database registration.
  • The deployer: the entity that uses an AI system in the course of a professional activity. That's you, as soon as you integrate ChatGPT into your processes, deploy a Copilot chatbot on your website, or automate tasks with an LLM.

As a deployer, you don't need to write 80 pages of technical documentation. But you have real, concrete, enforceable obligations from August 2026.

Article 50: The Rule That Applies to Everyone

This is the least known and yet most universal provision of the AI Act. Article 50 imposes transparency obligations on all AI systems that interact with humans or generate content - regardless of risk level, regardless of company size.

Obligation 1 - Inform users they are talking to an AI

If you deploy a chatbot powered by ChatGPT, Copilot or any other LLM on your website, application or internally for your employees, you must clearly inform people that they are interacting with an AI system (Article 50§1).

This obligation applies whenever the interaction is not obvious by nature. A customer service chatbot must identify itself as an AI. An automated HR assistant must be labelled as such. Saying "answered by our team" when it's actually GPT-4 is a direct violation.

Legitimate exceptions: interactions in professional contexts where the AI nature is manifest and known to all participants, and artistic or fictional uses where an AI character is explicitly scripted.
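
To make this concrete, here is a minimal sketch of what an Article 50§1 disclosure can look like in a web chat widget. The structure and the wording are illustrative assumptions - the AI Act prescribes the outcome (clear information), not this implementation:

```typescript
// Minimal sketch: an Article 50§1-style disclosure shown before any exchange
// in a hypothetical web chat widget. All names and wording are illustrative.

interface ChatMessage {
  role: "system-notice" | "user" | "assistant";
  text: string;
}

const AI_DISCLOSURE: ChatMessage = {
  role: "system-notice",
  // Plain-language notice, displayed prominently as the first message.
  text:
    "You are chatting with an AI assistant, not a human agent. " +
    "You can ask to be transferred to our team at any time.",
};

function openConversation(): ChatMessage[] {
  // Every new conversation starts with the disclosure, so no user can
  // interact with the bot without having seen it.
  return [AI_DISCLOSURE];
}

console.log(openConversation());
```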

Obligation 2 - Label deepfakes and manipulated content

Any video, image or audio content generated or manipulated by AI to resemble an existing real person must be clearly identified as synthetic content (Article 50§4). This applies to:

  • Marketing videos using AI avatars resembling real people
  • Synthetic voices imitating existing voices
  • AI-generated photos depicting identifiable individuals
  • Deepfake videos used for corporate communications
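
A publication pipeline can enforce this labelling rule mechanically. The sketch below is illustrative - the types and the label wording are our own assumptions, not text prescribed by the AI Act:

```typescript
// Sketch: attach a visible "AI-generated" label to any synthetic asset that
// depicts a real person before it is published. Illustrative types only.

interface MediaAsset {
  url: string;
  aiGenerated: boolean;
  depictsRealPerson: boolean; // e.g. an AI avatar modelled on a real employee
  visibleLabel?: string;      // caption or overlay shown with the asset
}

function prepareForPublication(asset: MediaAsset): MediaAsset {
  if (asset.aiGenerated && asset.depictsRealPerson && !asset.visibleLabel) {
    // Article 50§4: the synthetic nature must be clearly disclosed.
    return { ...asset, visibleLabel: "This content was generated with AI." };
  }
  return asset;
}

console.log(prepareForPublication({
  url: "https://example.com/avatar-video.mp4", // hypothetical asset
  aiGenerated: true,
  depictsRealPerson: true,
}));
```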

Obligation 3 - Machine-readable watermarking (from November 2026)

Article 50§2 requires AI-generated content to be marked in a machine-readable format. This obligation falls primarily on generative model providers - OpenAI, Mistral, Google. But as a deployer, you'll need to ensure the tools you use actually implement these mechanisms. Under the Digital Omnibus timetable adopted by the European Parliament, the applicable date for systems already on the market is 2 November 2026.
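
As a rough illustration of what "machine-readable" means, the Node.js sketch below checks whether an image file appears to carry a C2PA manifest, one of the provenance formats used to mark AI-generated content. This byte-scan is a crude heuristic of our own - real verification requires a proper C2PA validator - but it shows the kind of embedded marker you should expect from a compliant tool:

```typescript
// Crude heuristic (Node.js): does this file appear to embed a C2PA manifest?
// C2PA manifest stores are labelled "c2pa" inside embedded JUMBF boxes, so a
// naive byte-scan can hint at their presence. Not a substitute for real
// validation.
import { readFileSync } from "node:fs";

function looksC2paMarked(path: string): boolean {
  const bytes = readFileSync(path);
  return bytes.includes(Buffer.from("c2pa", "ascii"));
}

// "generated-image.jpg" is a hypothetical file path.
console.log(looksC2paMarked("generated-image.jpg"));
```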

When ChatGPT Becomes High-Risk in Your Organisation

By default, standard ChatGPT use for drafting emails or summarising documents falls under limited risk - only Article 50 applies. But the risk level depends on the use case, not the tool.

Your use of ChatGPT or an LLM can shift to high risk (Annex III) if you use it to:

  • Screen candidates or evaluate employees: any system influencing hiring, promotion or dismissal decisions is high-risk (Annex III, section 4)
  • Assess creditworthiness or credit risk: any automated financial scoring falls under Annex III, section 5
  • Guide access decisions for public services: if your AI influences decisions on social rights, benefits or access to essential services
  • Generate risk profiles on individuals: in an insurance, security or risk management context involving natural persons

In these cases, the deployer obligations of Article 26 apply on top: documented human oversight, log retention, and informing affected individuals.

The practical rule: it's not the tool that determines the risk level - it's what you do with it. ChatGPT for writing a newsletter is limited risk. ChatGPT embedded in an employee performance rating tool is high risk.
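
That rule can be encoded almost literally. The sketch below triages use cases rather than tools; the categories are our own simplification of Annex III, not an official taxonomy - when in doubt, read the annex itself:

```typescript
// Sketch: risk triage by use case, not by tool. Categories are illustrative.

type UseCase =
  | "email-drafting" | "document-summary" | "marketing-content"
  | "candidate-screening" | "employee-evaluation"
  | "credit-scoring" | "public-service-access" | "individual-risk-profiling";

type RiskLevel = "limited (Article 50)" | "high (Annex III + Article 26)";

const HIGH_RISK = new Set<UseCase>([
  "candidate-screening", "employee-evaluation",       // Annex III, section 4
  "credit-scoring",                                   // Annex III, section 5
  "public-service-access", "individual-risk-profiling",
]);

function classify(useCase: UseCase): RiskLevel {
  return HIGH_RISK.has(useCase)
    ? "high (Annex III + Article 26)"
    : "limited (Article 50)";
}

console.log(classify("marketing-content"));    // limited (Article 50)
console.log(classify("employee-evaluation"));  // high (Annex III + Article 26)
```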

ChatGPT and GPAI Obligations - What You Need to Know

OpenAI, Google, Anthropic, Mistral and other language model providers have been subject to Articles 51 to 56 obligations since August 2, 2025. As a professional user, this has practical implications:

  • These providers must publish a summary of the content used for training (Article 53), which must be made publicly available - you can consult it
  • They must maintain technical documentation available to supervisory authorities
  • They must implement copyright compliance policies for training data
  • If a model presents systemic risk (more than 10^25 FLOPs of cumulative training compute), enhanced obligations apply - GPT-4 and Gemini Ultra likely fall in this category (a rough order-of-magnitude check is sketched after this list)
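
For a sense of scale, a common community rule of thumb - not a formula from the AI Act - estimates training compute as roughly 6 × parameters × training tokens. A back-of-the-envelope check for a hypothetical model:

```typescript
// Back-of-the-envelope sketch using the "6 * N * D" rule of thumb for
// training compute (a community approximation, not an AI Act formula).

const SYSTEMIC_RISK_THRESHOLD = 1e25; // FLOPs, Article 51

function estimateTrainingFlops(parameters: number, tokens: number): number {
  return 6 * parameters * tokens;
}

// Hypothetical model: 500 billion parameters, 10 trillion training tokens.
const flops = estimateTrainingFlops(5e11, 1e13); // 3e25 FLOPs
console.log(flops >= SYSTEMIC_RISK_THRESHOLD);   // true -> presumed systemic risk
```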

These obligations don't exempt you from your own deployer responsibilities. They stack.

What You Need to Do Before August 2026

  1. Inventory all your generative AI uses: ChatGPT, Copilot, Gemini, Claude, tools embedded in your CRM or HR system... List everything, with the exact use case for each tool (a simple machine-readable register is sketched after this list).
  2. Classify each use by risk level: standard editorial use = limited risk (Article 50 only). Use influencing decisions about people = check Annex III.
  3. Audit your customer-facing interfaces: do you have an AI chatbot in production? Does it clearly tell users they're talking to an AI? If not, that's a violation to fix now.
  4. Update your legal notices and T&Cs: transparency also applies in contractual documents. If you use AI to process client data or produce content addressed to them, it must be mentioned.
  5. Check your SaaS contracts: is the tool you're using itself compliant with the AI Act? Ask your provider. Their answer conditions your own exposure.
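
One way to keep that inventory actionable is a simple machine-readable register. The sketch below is illustrative - every field name and value is an assumption to adapt to your own tooling:

```typescript
// Sketch of an internal AI-use register. Field names are illustrative.

interface AiUseRecord {
  tool: string;                 // e.g. "ChatGPT", "Copilot", a CRM plug-in
  useCase: string;              // the exact task, not just the product name
  audience: "internal" | "customers" | "both";
  riskLevel: "limited" | "high" | "to-review";
  article50Disclosure: boolean; // is the AI nature disclosed to users?
  owner: string;                // who answers for this use internally
}

const inventory: AiUseRecord[] = [
  {
    tool: "ChatGPT",
    useCase: "drafting marketing newsletters",
    audience: "internal",
    riskLevel: "limited",
    article50Disclosure: true,
    owner: "marketing@example.com",
  },
  {
    tool: "Copilot chatbot",
    useCase: "customer support on the website",
    audience: "customers",
    riskLevel: "to-review",
    article50Disclosure: false, // a violation to fix before August 2026
    owner: "support@example.com",
  },
];

console.table(inventory);
```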

The free AiActo diagnostic guides you through this classification in under 3 minutes - and tells you precisely whether your generative AI uses expose you to Article 50 only, or to additional high-risk obligations.

Frequently Asked Questions

My company uses ChatGPT internally only - am I still concerned?

Yes, if that internal use affects employees. Article 50 requires informing people who interact with an AI, including your own staff. And if ChatGPT is used to evaluate performance or guide HR decisions, you shift to high risk with Article 26 obligations.

OpenAI is American - does the AI Act apply to them?

Yes. The AI Act applies to any AI system whose outputs are used in the European Union, regardless of where the provider is established. OpenAI must comply with GPAI obligations for European use of ChatGPT. This is why documentation and transparency obligations apply to them from August 2025.

What's the difference between free ChatGPT and ChatGPT Enterprise?

From an AI Act perspective, what matters is your use, not the version. ChatGPT Enterprise offers additional contractual guarantees on data confidentiality - relevant for GDPR - but your AI Act obligations as a deployer are the same regardless of which version you use.

Does AI-generated content always need to be labelled?

Not systematically. The disclosure obligation primarily concerns deepfakes and manipulated content (Article 50§4), and direct interactions with AI chatbots (Article 50§1). Marketing text drafted with ChatGPT assistance and reviewed by a human is not automatically subject to a disclosure requirement. However, if that text is generated and published without genuine human oversight, the question arises.

What do I risk if I don't bring my chatbot into compliance before August 2026?

Violations of Article 50 are subject to fines of up to €15 million or 3% of global annual turnover, whichever is higher (Article 99§4). These penalties apply to both providers and deployers. National authorities - in France, the DGCCRF and the CNIL depending on the case - will have audit and enforcement powers from August 2026.

Generative AI entered businesses through the back door - a ChatGPT subscription here, a Copilot extension there. The AI Act now forces a direct look at what these uses imply from a regulatory standpoint. The good news: obligations for standard use are manageable. A clear label on your chatbot, an update to your T&Cs, an inventory of your tools. It's not insurmountable - as long as you act before the deadline. Check the AI Act timeline to plan your next steps.
