
AI surveillance at work: what the AI Act truly prohibits.

Since 2 February 2025, analysing employees' emotions has been prohibited. However, productivity monitoring remains a grey area. Which tools are affected? What risks do employers face?

Jérémy Pierre
AI Act Compliance Expert
6 May 2026 · 8 min read
Key takeaways · 4 figures to remember
2 February 2025
Prohibition of emotion inference in the workplace
€35M
Maximum penalty for using a prohibited system
Annex III
Productivity monitoring potentially high-risk
GDPR + AI Act
Dual legal framework for employee monitoring
01 - Regulation

What the AI Act prohibits from 2 February 2025

Article 5 of the AI Act explicitly prohibits emotion inference in the workplace. This measure came into force over a year ago, but its contours remain unclear for many businesses.

Article 5(1)(f) of the EU Regulation prohibits the placing on the market, putting into service, or use of AI systems designed to infer the emotions of a natural person in the workplace context. This prohibition covers several practices:

  • Analysis of micro-facial expressions via camera or video.
  • Detection of stress or fatigue through voice recognition.
  • Assessment of engagement in meetings via intonation or eye movements.
  • Any tool using biometric sensors to deduce an emotional state.

Exceptions are rare and strictly regulated. They mainly concern medical or safety uses, such as detecting drowsiness in professional drivers. For all other cases, the prohibition is absolute.

"Emotion inference in the workplace is not merely a compliance issue. It is an ethical and legal red line that Europe has drawn to protect employees' fundamental rights."
02 - Market tools

Market tools: who is affected by the prohibition?

Several popular tools incorporate emotion analysis features. Their status under the AI Act is now critical for employers.

Below is a non-exhaustive list of tools affected by the prohibition under Article 5:

  • HireVue: video analysis of candidates to assess their suitability for a role, including emotion detection via facial expressions. Status: prohibited since 2 February 2025 for HR uses in Europe.
  • Affectiva: emotion analysis solution based on facial and voice recognition, used in training or evaluation contexts. Status: prohibited for professional uses in Europe.
  • Microsoft Teams (advanced features): some experimental features analyse participant engagement in meetings via vocal intonation and eye movements. Status: under evaluation by the AI Office; use not recommended in its current state.
  • Otter.ai: automatic meeting transcription with tone and engagement analysis features. Status: under evaluation for professional uses in Europe.

Providers of these tools have until 2 August 2026 to comply with the transparency obligations under Article 50, but the prohibition under Article 5 already applies. Employers using these solutions risk penalties of up to €35 million or 7% of their worldwide annual turnover, whichever is higher.
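
The ceiling described above is the higher of the two amounts, so for large companies the turnover-based figure dominates. A minimal sketch of the arithmetic, assuming the figures quoted in this article (the function name is illustrative):

```python
def max_fine_eur(worldwide_turnover_eur: float) -> float:
    """Penalty ceiling for prohibited practices as described above:
    EUR 35 million or 7% of worldwide annual turnover,
    whichever is higher."""
    return max(35_000_000, 0.07 * worldwide_turnover_eur)

# A company with EUR 1 billion turnover: 7% = EUR 70M, above the EUR 35M floor
print(max_fine_eur(1_000_000_000))  # 70000000.0
```

For companies with a turnover below €500 million, 7% stays under €35 million, so the flat amount is the applicable ceiling.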

03 - Productivity monitoring

The grey area of productivity monitoring

The AI Act does not prohibit productivity monitoring, but such tools may be classified as high-risk depending on how they are used. Here is why.

Productivity monitoring tools are not prohibited under Article 5, but they may be classified as high-risk if they fall within the scope of Annex III. This annex covers, in particular:

  • AI systems used in the areas of employment, workers' management and access to self-employment (point 4 of Annex III).
  • Tools creating behavioural profiles of employees.
  • Solutions evaluating performance in an automated manner.

Below are some market tools and their potential status under the AI Act:

  • Time Doctor: tracks time spent on applications, random screenshots, keyboard/mouse activity analysis. Likely high-risk if used for performance evaluation.
  • Hubstaff: GPS tracking, screenshots, real-time activity monitoring. Likely high-risk for HR uses.
  • ActivTrak: analysis of work habits, distraction detection, activity reports. Likely high-risk if used for HR decisions.
  • Microsoft Viva Insights: analysis of work habits, productivity improvement recommendations, integration with Teams and Outlook. Uncertain status, depending on use and configuration.

For these tools, classification depends on their specific use. Basic time tracking may remain low-risk, while automated performance evaluation becomes high-risk. Employers must document their uses and assess risks on a case-by-case basis.

04 - GDPR

GDPR: the second framework to comply with for employee monitoring

The AI Act does not replace the GDPR. Employers must comply with both regulations or face cumulative penalties.

Employee monitoring is already regulated by the GDPR, particularly through:

  • The principle of data minimisation (Article 5(1)(c)).
  • The right of employees to be informed (Articles 13 and 14).
  • The obligation to conduct a data protection impact assessment (DPIA) for high-risk processing (Article 35).
  • In France, in addition to the GDPR, the mandatory consultation of employee representatives (the Works Council) for monitoring systems (Article L. 2312-38 of the French Labour Code).

The CNIL, the French data protection authority, has published specific recommendations on monitoring employees who telework. It states that monitoring tools must be proportionate and transparent. For example:

  • Random screenshots are permitted if justified and limited.
  • Real-time keyboard/mouse activity tracking is considered intrusive and should be avoided.
  • Employees must be informed about monitoring systems and their purposes.

The European Court of Human Rights' Bărbulescu v. Romania ruling (Grand Chamber, 2017) already set strict limits on monitoring employees' communications. The AI Act strengthens this framework by prohibiting certain practices and classifying others as high-risk.

05 - Compliance

How to achieve compliance with the AI Act and GDPR?

Employers must take action on several fronts to avoid penalties and protect employees' rights.

Here is a compliance checklist:

  1. Audit the tools used: Identify all AI tools used for monitoring or evaluating employees. Check if they fall within the scope of Article 5 or Annex III of the AI Act.
  2. Classify AI systems: For each tool, determine whether it is prohibited, high-risk, or low-risk. Document this classification.
  3. Comply with GDPR obligations: Inform employees about monitoring systems, conduct a DPIA if necessary, and consult the Works Council.
  4. Adapt uses: Disable prohibited features (emotion analysis) and limit high-risk uses to strictly necessary cases.
  5. Train teams: Raise awareness among managers and HR about the legal limits of monitoring. Train employees on their rights.
  6. Document compliance: Maintain a register of AI processing activities, retain evidence of compliance, and prepare responses in case of inspection.

Employers can rely on the AI Office's guidelines and the CNIL's recommendations to guide their approach. If in doubt, a compliance assessment can help identify risks specific to an organisation.
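
The triage in steps 1 and 2 of the checklist can be sketched as a simple decision rule built from the criteria this article lists. This is an illustrative sketch with hypothetical names, not a legal determination; actual classification requires case-by-case review:

```python
from dataclasses import dataclass

@dataclass
class MonitoringTool:
    name: str
    infers_emotions: bool          # Article 5(1)(f) territory
    automates_hr_decisions: bool   # Annex III, point 4 territory
    profiles_behaviour: bool       # behavioural profiling of employees

def classify(tool: MonitoringTool) -> str:
    """Rough first-pass triage per the criteria described in this article."""
    if tool.infers_emotions:
        return "prohibited"        # banned in the workplace since 2 Feb 2025
    if tool.automates_hr_decisions or tool.profiles_behaviour:
        return "high-risk"         # Annex III: strict obligations apply
    return "minimal-risk"          # e.g. basic time tracking

print(classify(MonitoringTool("ExampleTracker", False, True, False)))  # high-risk
```

A real audit would feed the documented inventory from step 1 through a rule set like this, then record the rationale for each classification (step 2's documentation requirement).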


06 - FAQ

Frequently asked questions

Answers to the most common questions about AI surveillance at work and the AI Act.

What is emotion inference?

Emotion inference involves using AI systems to deduce an employee's emotional state from biometric or behavioural data. This includes analysing facial expressions, vocal intonation, eye movements, or other physiological signals. For example, a tool that assesses an employee's stress level by analysing their voice during a meeting uses emotion inference.

Why does the AI Act prohibit emotion inference in the workplace?

The prohibition is based on several fundamental principles. First, emotion inference infringes on employees' right to privacy and dignity. Second, these technologies are often unreliable and can lead to discrimination or biased assessments. Finally, Europe considers that the workplace should not become a space of intrusive surveillance, where employees' emotions are constantly analysed and used to make decisions about them.

What penalties do employers face for using a prohibited system?

Penalties for using a system prohibited under Article 5 of the AI Act can reach €35 million or 7% of the company's global turnover, whichever is higher. In addition to fines, the company faces significant reputational risks and potential legal action from employees or trade unions. Finally, non-compliant tools may be withdrawn from the market, causing operational disruptions.

Are productivity monitoring tools prohibited?

No, productivity monitoring tools are not prohibited under Article 5 of the AI Act, unless they include emotion inference features. However, depending on how they are used, these tools may be classified as high-risk under Annex III. For example, a tool that automatically evaluates employee performance and influences HR decisions, such as promotions or dismissals, will likely be considered high-risk and subject to strict obligations.

What obligations does the GDPR impose on employers who monitor employees?

The GDPR imposes several obligations on employers who monitor their employees. First, they must inform employees about monitoring systems and their purposes. Second, they must comply with the principle of data minimisation, limiting data collection to what is strictly necessary. A data protection impact assessment (DPIA) is mandatory for high-risk processing. Finally, in France, employee representatives (the Works Council) must also be consulted before implementing any monitoring system.

Jérémy Pierre
Founder aiacto.eu · AI Act Compliance Expert

Supports AI providers and deployers in achieving regulatory compliance.
