Generative AI at work: the concrete obligations for 2026
The EU AI Act sets precise requirements for businesses that develop or deploy generative AI systems. With GPAI obligations already in force since August 2025 and mandatory transparency coming in November 2026, the regulatory framework is tightening. Here is what applies in practice.

2026: the year AI obligations become real for businesses
The AI Act is no longer a text in progress. Since August 2025, the first substantive obligations apply, and November 2026 marks the next critical deadline.
The European Artificial Intelligence Act (Regulation EU 2024/1689), which entered into force on 1 August 2024, rolls out its obligations on a phased schedule over four years. For businesses using, integrating or developing generative AI systems, three regulatory waves overlap in 2025-2026: prohibited practices have applied since 2 February 2025; obligations relating to General-Purpose AI (GPAI) models have applied since 2 August 2025; and the obligation to inform end users takes effect on 2 November 2026.
Ignoring this timeline exposes organisations to significant financial penalties and growing reputational risk. National market surveillance authorities are being designated across Member States and will begin exercising their inspection powers as soon as their mandate is formally established.
Provider or deployer: a distinction that changes everything
The AI Act organises responsibility around two main roles. Identifying yours is the first step of any compliance exercise.
The provider
A provider is any entity that develops an AI system and places it on the market or puts it into service, whether for payment or free of charge. If your organisation uses a large language model API to build a product or service offered to third parties under its own name, you are legally a provider. You must produce technical documentation, assess your system's conformity and affix the CE marking if your system is classified as high-risk.
The deployer
A deployer is any legal person that uses an AI system in a professional context without having developed it. If your organisation subscribes to a SaaS tool integrating generative AI - a writing assistant, meeting summariser or support chatbot - you are a deployer. Your obligations are less extensive, but they exist: informing users, ensuring human oversight over significant decisions, following the provider's instructions and training staff.
GPAI: the requirements already in force since August 2025
Articles 51 to 56 of the AI Act create a specific regime for General-Purpose AI models. These rules apply to providers placing a foundation model on the market: large language models, image generation models, multimodal models.
Every GPAI model provider must comply with four baseline obligations:
- establish and maintain technical documentation describing the model's architecture, training data and capabilities;
- put in place and publish a policy to comply with EU copyright law during the training phase;
- make a summary of the content used for training publicly accessible;
- supply downstream operators with the information they need for their own compliance.
Models posing systemic risk
GPAI models trained with cumulative compute exceeding 10^25 floating-point operations (FLOPs) fall into the systemic risk category. For these models, obligations are reinforced: adversarial evaluations under standardised protocols, notification of serious incidents to the EU AI Office, and cybersecurity measures proportionate to the identified risks.
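To make the threshold tangible, here is a minimal sketch, in Python, of the back-of-the-envelope check teams commonly run: the 6 × parameters × tokens approximation of training compute. That heuristic comes from the scaling-laws literature, not from the Act itself, and the model figures below are purely illustrative.

```python
# Rough check of estimated training compute against the AI Act's 10^25 FLOP
# presumption threshold for systemic-risk GPAI models (Art. 51).
# The 6 * N * D approximation is a scaling-laws heuristic, not part of the Act.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # Art. 51 presumption threshold

def estimated_training_flops(n_parameters: float, n_training_tokens: float) -> float:
    """Approximate total training compute as 6 * parameters * tokens."""
    return 6 * n_parameters * n_training_tokens

# Illustrative example: a 70B-parameter model trained on 15T tokens.
flops = estimated_training_flops(70e9, 15e12)
print(f"~{flops:.1e} FLOPs; systemic risk presumed: {flops > SYSTEMIC_RISK_THRESHOLD_FLOPS}")
# ~6.3e24 FLOPs; systemic risk presumed: False
```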
Article 50: the transparency obligation enters into force on 2 November 2026
Article 50 contains the obligation that will affect the greatest number of businesses before the end of 2026. It requires clear disclosure to users who interact with AI systems or consume AI-generated content.
Chatbots and conversational agents
Any deployer of an AI system intended to interact directly with natural persons must inform those persons in real time that they are interacting with an AI. This obligation applies unless the interaction with an AI is obvious from context. It targets publicly accessible customer support chatbots, virtual assistants embedded in applications, and AI agents to which third parties have access.
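As a purely illustrative sketch, here is one way a deployer might wire that disclosure into a chat backend. The function name and session handling are assumptions, not a prescribed design; Article 50 mandates the disclosure itself, not any particular implementation.

```python
# Minimal sketch: show an AI disclosure at the start of each chat session.
# Session handling and wording are illustrative assumptions.

AI_DISCLOSURE = (
    "You are chatting with an AI assistant. "
    "You can ask for a human agent at any time."
)

def reply_with_disclosure(session: dict, answer: str) -> str:
    """Prepend the AI disclosure to the first reply of a session."""
    if not session.get("disclosure_shown"):
        session["disclosure_shown"] = True
        return f"{AI_DISCLOSURE}\n\n{answer}"
    return answer

session: dict = {}
print(reply_with_disclosure(session, "Hello! How can I help you today?"))
print(reply_with_disclosure(session, "Here is the order status you asked for."))
```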
Synthetic content and deepfakes
Providers of systems generating synthetic images, audio, video or text must mark that content in a machine-readable manner. Deployers who distribute such content must explicitly disclose it as AI-generated. Limited exceptions exist for satire, parody and certain national security uses.
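For images, one simple way to attach a machine-readable marker is embedded file metadata. The sketch below uses Pillow's PNG text chunks with an illustrative tag name; nothing in the Act prescribes this exact scheme, and provenance standards such as C2PA are the more robust route in production.

```python
# Minimal sketch: embed a machine-readable AI-generation marker in a PNG.
# The tag names and values are illustrative, not a mandated standard.

from PIL import Image
from PIL.PngImagePlugin import PngInfo

image = Image.new("RGB", (64, 64))  # stand-in for a generated image

metadata = PngInfo()
metadata.add_text("ai_generated", "true")
metadata.add_text("generator", "example-model-v1")  # hypothetical identifier

image.save("output.png", pnginfo=metadata)

# Downstream check: read the marker back from the file.
print(Image.open("output.png").text)
# {'ai_generated': 'true', 'generator': 'example-model-v1'}
```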
High-risk systems: are you affected before 2027?
Annex III of the AI Act lists the domains in which an AI system is presumed to be high-risk. The main deadline for these systems is 2 December 2027, but providers who wish to be ready in time need to start preparing now.
Affected sectors include human resources management (CV screening, performance evaluation, promotion decisions), access to education, access to essential services (credit, insurance, social benefits), administration of justice and democratic processes. If your generative AI tool is embedded in any of these processes, it may be classified as high-risk regardless of its underlying technology.
The AI Act's logic is functional, not technological. The same language model used to draft newsletters is minimal risk. Used to score job applicants, it becomes high-risk. It is the actual use case that determines classification, not the underlying model. This distinction requires businesses to document precisely how and in what context their AI tools are deployed.
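To make the point concrete, here is a toy classifier in Python. The domain keys are a loose paraphrase of Annex III, not a legal test; any real assessment is case by case.

```python
# Toy illustration of use-case-driven classification: the same model is
# minimal risk or high-risk depending on where it is deployed.
# The domain set is a simplified paraphrase of Annex III, not a legal test.

ANNEX_III_DOMAINS = {
    "recruitment", "employee_evaluation", "education_access",
    "credit_scoring", "insurance_pricing", "essential_benefits", "justice",
}

def presumed_risk_class(use_case_domain: str) -> str:
    """Return the presumed risk class for a given deployment context."""
    return "high-risk" if use_case_domain in ANNEX_III_DOMAINS else "minimal risk"

print(presumed_risk_class("newsletter_drafting"))  # minimal risk
print(presumed_risk_class("recruitment"))          # high-risk
```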
Six concrete obligations to implement before November 2026
Whatever your position in the AI value chain, here are the priority actions to ensure compliance before the next regulatory deadline.
- Map your AI systems: inventory all AI tools used or deployed by your organisation, and identify their likely classification (minimal risk, high-risk, GPAI); a register sketch follows this list.
- Qualify your legal role: for each system, determine whether you are a provider or deployer. A product built on a third-party API makes you a provider with substantial obligations.
- Check for prohibited practices: subliminal manipulation, exploitation of vulnerabilities, generalised social scoring - verify that no system in production has crossed these lines, prohibited since February 2025.
- Prepare transparency notices: draft user information notices for chatbots and generative systems. The 2 November 2026 deadline is approaching.
- Train staff (Art. 4): the AI literacy obligation has been in force since 2 February 2025. Documented training adapted to the systems actually in use is the minimum baseline expected.
- Document and track: maintain a register of conformity assessments, human oversight measures and incidents. This register will be indispensable in the event of an inspection by a market surveillance authority.
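As flagged in the first step, here is a minimal sketch of what such an inventory-and-compliance register could look like in Python. The schema and field names are illustrative assumptions; the AI Act prescribes the duties, not a data model.

```python
# Minimal sketch of an AI system register covering mapping, role
# qualification and documentation. Field names are illustrative assumptions.

from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    name: str
    role: str                  # "provider" or "deployer"
    risk_class: str            # "minimal", "high-risk" or "GPAI"
    use_case: str              # the functional context that drives classification
    user_disclosure: bool      # Art. 50 transparency notice in place?
    human_oversight: str       # description of the oversight measure
    staff_trained: bool        # Art. 4 AI literacy training documented?
    incidents: list = field(default_factory=list)

register = [
    AISystemRecord(
        name="Customer support chatbot",
        role="deployer",
        risk_class="minimal",
        use_case="public-facing customer support",
        user_disclosure=True,
        human_oversight="human agent takeover on request",
        staff_trained=True,
    ),
]

# Quick audit view: systems still missing an Art. 50 user disclosure.
missing = [r.name for r in register if not r.user_disclosure]
print(missing or "All systems carry a user disclosure.")
```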
Our free diagnostic tool helps you identify in 3 minutes the obligations that apply to your organisation, based on your sector and use of AI.
Legal framework: the key articles to know
- Article 4: AI literacy obligation for providers and deployers, in force since 2 February 2025.
- Article 5: prohibited practices, applicable since 2 February 2025.
- Article 50: transparency obligations for chatbots and AI-generated content, in force on 2 November 2026.
- Articles 51 to 56: regime for General-Purpose AI (GPAI) models, in force since 2 August 2025.
- Annex III: domains in which an AI system is presumed high-risk, main deadline 2 December 2027.
Is your organisation ready for the 2026 AI obligations?
Identify in 3 minutes the obligations that apply to your organisation: provider or deployer status, systems in scope, priority deadlines. The diagnostic is free and requires no registration.
Frequently asked questions
Answers to the most common questions about AI obligations for businesses in 2026.
Which obligations apply to a company that simply uses a third-party AI tool?
A company using a third-party AI tool in a professional context is a deployer under the AI Act. It must follow the provider's instructions, inform end users about the use of AI, ensure human oversight over significant decisions, and provide its staff with AI literacy training (Art. 4, in force since February 2025). If it builds a product on top of a third-party API, it becomes a provider with more extensive obligations.
What is the difference between a provider and a deployer?
A provider is the entity that develops and places an AI system on the market. A deployer uses it in a professional context without having developed it. The provider bears the burden of technical documentation, CE marking and conformity assessment. The deployer must follow the provider's instructions, inform users and ensure human oversight of automated decisions affecting natural persons.
Does my customer support chatbot have to disclose that it is an AI?
Yes, if your chatbot interacts with natural persons and the context does not make the interaction with an AI self-evident. The obligation enters into force on 2 November 2026. A chatbot used internally and clearly identified as AI by all its users may benefit from the contextual exception. A customer-facing chatbot accessible to the public does not benefit from this exception and will require an explicit disclosure.
What penalties apply in the event of non-compliance?
Non-compliance with the transparency obligations under Article 50 can result in a fine of up to 15 million euros or 3% of global annual turnover, whichever is higher. Violations of prohibited practices (Art. 5) carry heavier penalties: up to 35 million euros or 7% of global turnover.
Are open-source models exempt from the GPAI obligations?
Partially. Providers of open-source GPAI models benefit from exemptions on certain documentation obligations, provided the model's weights are publicly accessible. However, if the model poses systemic risk (training compute exceeding 10^25 FLOPs), those exemptions do not apply: the reinforced obligations are mandatory even for open-source models.
When does a generative AI tool become high-risk?
Classification depends on the use case, not the model. A generative AI tool becomes high-risk if it is used in a sector listed in Annex III: human resources, education, credit, insurance, justice. If your tool helps screen applications, score credit files or make decisions affecting access to rights, it is likely high-risk, regardless of the underlying model.
Is AI literacy training already mandatory?
Yes. Article 4 entered into application on 2 February 2025. It requires providers and deployers to ensure that staff working with AI systems possess a sufficient level of AI literacy. Documented training tailored to the systems actually in use constitutes the minimum baseline expected by market surveillance authorities.
