
AI Act and SMEs: What You Must Do to Comply Before August 2026

13 March 2026 · 9 min read

Key takeaways

On August 2, 2026, AI Act obligations come into full force for high-risk AI systems and transparency rules. SMEs are not exempt — whether they develop or use AI systems, they have concrete obligations to meet. Here's a practical guide to prepare before the deadline.

When AI Act compliance comes up, large tech companies tend to dominate the conversation. SMEs often assume the regulation doesn't apply to them — or at least not yet. That assumption could be costly.

On August 2, 2026, obligations enter into force for high-risk AI systems and transparency requirements. An SME using an AI-powered HR platform, a credit scoring tool, a customer service chatbot, or an automated recruitment system is — knowingly or not — a deployer under the AI Act. The obligations apply to it.

SMEs and the AI Act: Who Exactly Is Covered?

The AI Act distinguishes two main roles:

  • The provider: the entity that develops and places an AI system on the market
  • The deployer: the entity that uses an AI system in the course of a professional activity

The vast majority of SMEs are deployers. They didn't build the AI model — they subscribed to a SaaS platform, integrated a tool into their CRM, or connected an API. But that doesn't exempt them. Article 26 of the regulation defines their obligations precisely.

Some SMEs are also providers — those that develop and sell software with embedded AI. If that's your case, the requirements are significantly heavier (technical documentation per Annex IV, CE marking, EU database registration). This article focuses on the most common scenario: the SME as a deployer.

Step 1 — Inventory Your AI Systems

You can't comply with what you haven't identified. The first concrete action is to build a comprehensive inventory of every AI system used in your organisation:

  • HR software with automated scoring (recruitment, performance evaluation)
  • Customer-facing tools incorporating a chatbot or generative AI
  • Credit scoring or financial analysis systems
  • Employee monitoring or work analytics tools
  • Any SaaS solution mentioning "AI", "machine learning" or "predictive" in its documentation

For each tool identified, note: the vendor name, the exact AI function, the data processed, and the individuals affected.
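The inventory can live in a spreadsheet, but even a minimal structured record helps keep it consistent and auditable. A sketch in Python — the class and field names are illustrative, not prescribed by the regulation, and the sample vendor is hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One row of the AI inventory. Field names are illustrative,
    not mandated by the AI Act."""
    vendor: str                      # who supplies the tool
    ai_function: str                 # what the AI component actually does
    data_processed: list[str] = field(default_factory=list)
    individuals_affected: list[str] = field(default_factory=list)

inventory = [
    AISystemRecord(
        vendor="ExampleHR SaaS",     # hypothetical vendor name
        ai_function="automated CV scoring",
        data_processed=["CVs", "assessment scores"],
        individuals_affected=["job applicants"],
    ),
]
```

A flat structure like this is enough to answer the classification questions in the next step, and it exports trivially to the documentation file you will build in Step 3.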

Step 2 — Classify the Risks

Once the inventory is complete, classify each system according to the AI Act's risk levels:

  • Unacceptable risk (prohibited): mass social scoring, subliminal manipulation, certain biometric uses
  • High risk (Annex III): AI in recruitment, employee management, access to essential services, education
  • Limited risk: chatbots, generative tools, deepfakes — transparency obligations apply (Article 50)
  • Minimal risk: spam filters, content recommendations — no specific obligations

Classification determines the intensity of your obligations. An AI CV screening tool falls under "high risk" (Annex III, section 4): the requirements that apply are substantial.
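The four tiers above can be sketched as a simple lookup. The use-case keywords below are illustrative only — real classification requires reading Annex III against each system's actual use, and borderline cases need legal review:

```python
from enum import Enum

class RiskLevel(Enum):
    UNACCEPTABLE = "prohibited (Article 5)"
    HIGH = "high risk (Annex III)"
    LIMITED = "limited risk (Article 50 transparency)"
    MINIMAL = "minimal risk"

# Illustrative mapping from use case to tier; not an official taxonomy.
USE_CASE_TIERS = {
    "social scoring": RiskLevel.UNACCEPTABLE,
    "cv screening": RiskLevel.HIGH,       # Annex III, point 4 (employment)
    "credit scoring": RiskLevel.HIGH,
    "customer chatbot": RiskLevel.LIMITED,
    "spam filter": RiskLevel.MINIMAL,
}

def classify(use_case: str) -> RiskLevel:
    # Unknown use cases default to HIGH so they trigger a human review
    # instead of silently dropping out of scope.
    return USE_CASE_TIERS.get(use_case.lower(), RiskLevel.HIGH)
```

Defaulting unknown systems to the high-risk bucket is a deliberately conservative choice: it is cheaper to downgrade a tool after review than to discover an unclassified high-risk system during an audit.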

Step 3 — Meet Your Deployer Obligations

For systems classified as high risk, Article 26 requires deployers to:

  • Effective human oversight: significant decisions produced by the system must be reviewable, contestable, and correctable by a competent human. This oversight must be real, not just formal.
  • Log retention: you must retain the automatically generated system logs for a period appropriate to the system's purpose, and for at least six months unless Union or national law provides otherwise (Article 26(6)), to enable traceability in the event of a dispute or audit.
  • Use in line with provider instructions: you cannot repurpose the system beyond its intended use. If the provider restricts certain uses in its documentation, those limits are binding on you.
  • Informing affected individuals: anyone subject to a decision made or influenced by a high-risk AI system must be informed (Article 26(11)); separate transparency duties may also apply under Article 50.
  • Fundamental rights impact assessment (Article 27): mandatory for public bodies and certain essential services operators. Check whether this applies to your sector.

Step 4 — Handle Transparency for All Other Systems

Even if none of your systems are classified as high risk, Article 50 applies to all tools that interact with humans or generate content:

  • A chatbot deployed on your website must clearly tell users they are speaking to an AI
  • A content generation tool (text, images, video) must indicate that content is AI-generated
  • Any voice synthesis content imitating a human voice must be identified as such

These transparency obligations are frequently underestimated by SMEs, but they apply from August 2, 2026, regardless of the system's risk level.
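As a minimal illustration of the chatbot disclosure, here is a sketch assuming a hypothetical reply-handling function in your chatbot integration; the disclosure wording is illustrative, not prescribed by the regulation:

```python
AI_DISCLOSURE = "You are chatting with an AI assistant."  # illustrative wording

def first_reply(answer: str, already_disclosed: bool) -> str:
    """Prepend the AI disclosure to the first message of a session.
    Once the user has been clearly informed, repeating the notice on
    every turn is unnecessary."""
    if already_disclosed:
        return answer
    return f"{AI_DISCLOSURE}\n\n{answer}"
```

The point is structural: the disclosure should be wired into the conversation flow itself, not buried in terms of service, so that it is shown before the user starts relying on the exchange.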

Step 5 — Engage With Your SaaS Vendors

As a deployer, you have the right — and the responsibility — to verify that the tools you use are themselves compliant. Ask your vendors these specific questions:

  • Is your system classified as high risk under the AI Act?
  • Do you have technical documentation compliant with Annex IV?
  • What training data was used, and how is it documented?
  • What human oversight mechanisms do you provide for your customers?
  • Are you registered in the EU database for high-risk systems?

A vendor that cannot answer these questions is a compliance risk for your organisation. This should be reflected in your contracts.
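One way to keep vendor answers auditable is a structured record per vendor. The question keys below paraphrase the list above; the "unanswered question = gap" rule is a practical assumption for tracking, not something the regulation mandates:

```python
# Keys paraphrase the vendor questionnaire; names are illustrative.
VENDOR_QUESTIONS = [
    "high_risk_classification_stated",
    "annex_iv_documentation",
    "training_data_documented",
    "human_oversight_mechanisms",
    "eu_database_registration",
]

def vendor_gaps(answers: dict[str, bool]) -> list[str]:
    """Return the questions a vendor could not answer positively.
    Missing answers count as gaps."""
    return [q for q in VENDOR_QUESTIONS if not answers.get(q, False)]

gaps = vendor_gaps({
    "annex_iv_documentation": True,
    "human_oversight_mechanisms": True,
})
```

Each remaining gap is a candidate for a contractual clause or a line in your risk register, which is exactly the paper trail an auditor will ask for.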

What You Risk by Doing Nothing

The AI Act establishes a tiered penalty regime:

  • Up to €35 million or 7% of global annual turnover, whichever is higher, for prohibited practices (Article 99(3))
  • Up to €15 million or 3% of global annual turnover for non-compliance with high-risk system obligations (Article 99(4))
  • Up to €7.5 million or 1% of global annual turnover for supplying incorrect, incomplete or misleading information to authorities (Article 99(5))

For SMEs, the regulation explicitly requires supervisory authorities to account for organisational size and available resources. But there is no blanket exemption: an SME cannot ignore the regulation — it can only benefit from proportionate enforcement.
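That proportionality has a concrete arithmetic side: for undertakings in general, each tier is capped at the fixed amount or the turnover percentage, whichever is higher, while Article 99(6) caps fines for SMEs at whichever of the two is lower. A sketch of that rule (amounts in euros; the function name is illustrative):

```python
def fine_cap(fixed_cap_eur: float, pct: float, turnover_eur: float,
             is_sme: bool) -> float:
    """Maximum fine for one tier of Article 99.
    Standard rule: whichever is higher; SMEs (Art. 99(6)): whichever is lower."""
    pct_cap = pct * turnover_eur
    return min(fixed_cap_eur, pct_cap) if is_sme else max(fixed_cap_eur, pct_cap)

# High-risk tier (€15M / 3%) for an SME with €10M annual turnover:
sme_cap = fine_cap(15_000_000, 0.03, 10_000_000, is_sme=True)
# → 300_000.0 (3% of turnover, lower than the €15M fixed cap)
```

Even with the SME cap, 3% of annual turnover is rarely an acceptable loss, which is why the inexpensive steps below are worth doing now.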

Where to Start Concretely

The good news: you don't need a dedicated legal team to get started. Three actions are enough to initiate the process:

  1. Inventory — list all your SaaS tools and ask whether each includes an AI component
  2. Classification — for each identified tool, determine its risk level against Annex III
  3. Documentation — start recording your decisions, oversight processes, and exchanges with vendors

These three steps can be completed within days. They form the foundation of any robust compliance file — and the evidence, in the event of an audit, that you acted in good faith before the deadline.

The AiActo AI Act diagnostic guides you through this classification and automatically generates the first elements of your compliance file, tailored to your profile as a deployer or provider.
