
Who Must Conduct a Fundamental Rights Impact Assessment, How, and By When?

23 March 2026 · 8 min read

Key takeaways

  • Article 27 obligation: the FRIA (Fundamental Rights Impact Assessment) is required from specific categories of deployers of high-risk AI systems - bodies governed by public law, private entities providing public services (education, healthcare, social services...) and deployers of credit scoring or life and health insurance pricing systems
  • Distinct from the DPIA: the FRIA goes beyond personal data protection - it assesses the impact on all fundamental rights (non-discrimination, dignity, access to justice, freedom of expression...)
  • Before deployment: the FRIA must be completed before the system is put into service, not after. It is a precondition for legitimate deployment.
  • Structured minimum content: Article 27 lists the elements every FRIA must cover - the deployer's processes, the period and frequency of use, the categories of persons affected, the specific risks of harm, human oversight measures and the measures planned if risks materialise
  • Notification obligation: FRIA results must be notified to the market surveillance authority; public-sector deployers must also register their use of the system in the EU database of high-risk AI systems
  • August 2026 deadline: the obligation applies from 2 August 2026, when the high-risk regime takes effect - affected deployers must complete their FRIA before putting a system into service from that date, unless the timeline slips under the proposed Digital Omnibus

When it comes to AI Act compliance, the technical documentation required from providers tends to attract most of the attention. Yet deployers have their own specific obligations - including one assessment that is frequently overlooked: the FRIA, or Fundamental Rights Impact Assessment. Required by Article 27 of Regulation (EU) 2024/1689 from specific categories of deployers, it must be completed before any high-risk AI system is put into service.

Very little has been written about the FRIA in plain language. The result: many affected organisations do not know they are subject to it, or confuse it with the GDPR's Data Protection Impact Assessment (DPIA). This guide clarifies both.

What Is the FRIA and Where Does It Come From?

The FRIA is a structured assessment that the deployer of a high-risk AI system must conduct to measure the potential impact of that system on the fundamental rights of the individuals it affects. It draws inspiration from existing impact assessments in EU law - notably the GDPR's DPIA - but its scope is considerably broader.

Where the DPIA focuses on risks related to personal data processing, the FRIA covers the full scope of the EU Charter of Fundamental Rights:

  • Human dignity and personal integrity
  • The right to non-discrimination (Article 21 of the Charter)
  • Data protection and respect for private life
  • Freedom of expression and information
  • The right to an effective remedy and a fair trial
  • Children's rights and the rights of vulnerable persons
  • Equality between women and men
  • Workers' rights, including conditions of employment

In practice, the FRIA requires the deployer to answer a fundamental question: "By using this AI system, do I risk infringing the basic rights of the people this system affects?"
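To make that question tractable, many teams keep a structured inventory of the Charter rights and record an assessment against each one. Below is a minimal sketch in Python; the field names (affected, likelihood, severity) are illustrative choices for this example, not terms prescribed by Article 27.

```python
# A minimal sketch of a fundamental-rights inventory for a FRIA.
# The rights list follows the EU Charter; all field names are
# illustrative, not prescribed by Article 27.

CHARTER_RIGHTS = [
    "Human dignity and personal integrity",
    "Non-discrimination (Art. 21)",
    "Data protection and private life (Arts. 7-8)",
    "Freedom of expression and information (Art. 11)",
    "Effective remedy and fair trial (Art. 47)",
    "Rights of the child and of vulnerable persons",
    "Equality between women and men (Art. 23)",
    "Workers' rights and fair working conditions (Art. 31)",
]

# One assessment entry per right: is it affected in this deployment
# context, and if so, how likely and how severe is the impact?
rights_inventory = {
    right: {"affected": None, "likelihood": None, "severity": None, "notes": ""}
    for right in CHARTER_RIGHTS
}
```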

Who Is Subject to the FRIA?

Contrary to what its name might suggest, the FRIA does not apply to all deployers of high-risk systems. Article 27 restricts it to specific categories:

Public law bodies

Any public authority or body governed by public law deploying a high-risk AI system is subject to the FRIA. This covers national and local government administrations, public establishments (hospitals, public universities, employment agencies...), social security bodies, judicial and law enforcement authorities, and border control agencies.

Private entities providing public services

Private companies are also caught when they provide services of public interest - recital 96 cites education, healthcare, social services, housing and the administration of justice as examples. In addition, Article 27 applies regardless of the deployer's status to two Annex III use cases: creditworthiness assessment and credit scoring (point 5(b)) and risk assessment and pricing in life and health insurance (point 5(c)), which brings banks and insurers into scope. One notable carve-out: high-risk systems intended to be used as safety components of critical infrastructure (Annex III, point 2) are expressly excluded from the FRIA obligation.

Education and vocational training institutions

Schools, universities and training organisations - public or private, as providers of a public service - that use AI systems to assess students, manage access to courses or guide learning pathways are subject to the FRIA if those systems fall under Annex III.

If you are a standard private company - not providing a public service, and not deploying credit scoring or life and health insurance pricing systems - simply using an AI-powered HR tool, you are not in principle subject to the FRIA. Your obligations are those of Article 26 (human oversight, logs, informing affected individuals) but not the formal assessment of Article 27.
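As a rough illustration of this scoping logic, the check below encodes the Article 27(1) categories as boolean flags. The function and its parameters are assumptions for this sketch, not an official decision tool - edge cases (mixed public/private activities, the critical infrastructure carve-out) call for legal analysis.

```python
# Illustrative scoping check for the Article 27 FRIA obligation.
# The categories mirror Article 27(1); names and flags are assumptions.

def fria_required(
    is_public_law_body: bool,
    provides_public_services: bool,
    deploys_credit_scoring: bool,                  # Annex III, point 5(b)
    deploys_life_health_insurance_pricing: bool,   # Annex III, point 5(c)
) -> bool:
    """Return True if the deployer falls within Article 27's scope."""
    return (
        is_public_law_body
        or provides_public_services
        or deploys_credit_scoring
        or deploys_life_health_insurance_pricing
    )

# A standard commercial SME using an AI CV-screening tool:
print(fria_required(False, False, False, False))  # False -> Article 26 only
```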

FRIA vs DPIA: The Concrete Differences

  • Legal basis: the DPIA stems from Article 35 of the GDPR, the FRIA from Article 27 of the AI Act. These are two distinct obligations under two separate regulations.
  • Scope: the DPIA covers risks related to personal data processing only. The FRIA covers all fundamental rights, including those unrelated to privacy (non-discrimination, access to justice, workers' rights...).
  • Trigger: the DPIA is triggered by high-risk personal data processing. The FRIA is triggered by the deployment of an Annex III high-risk AI system (other than critical infrastructure, point 2) by an in-scope deployer.
  • Possible coordination: the two assessments can be conducted jointly and their content may overlap - particularly on data protection and non-discrimination. The regulation actively encourages this coordination to avoid duplication.

How to Conduct a FRIA: The Steps

Step 1 - System and context description

Start by precisely documenting the AI system concerned: its purpose, simplified technical operation, the decisions it produces or influences, and the organisational context in which it is deployed. Who uses it? For what purpose? On which populations?
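One way to capture this step consistently is a structured record. The sketch below uses a Python dataclass whose fields loosely mirror Article 27(1)(a)-(c); the class name, field names and example values are illustrative (the system described is hypothetical).

```python
# Sketch of a Step 1 record: system and context description.
# Field names loosely follow Article 27(1)(a)-(c); all are illustrative.

from dataclasses import dataclass

@dataclass
class SystemDescription:
    name: str
    intended_purpose: str             # what the system is for
    decisions_influenced: list[str]   # decisions it produces or influences
    deployer_process: str             # the business process it is embedded in
    users: list[str]                  # who operates it
    affected_populations: list[str]   # on which populations it is used
    usage_period: str                 # period and frequency of intended use

desc = SystemDescription(
    name="CandidateRank",             # hypothetical example system
    intended_purpose="Shortlisting applicants for public-sector vacancies",
    decisions_influenced=["invitation to interview"],
    deployer_process="Recruitment pre-screening",
    users=["HR officers"],
    affected_populations=["job applicants"],
    usage_period="Continuous, for every vacancy",
)
```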

Step 2 - Identification of affected individuals

Determine who is affected by the system - directly (people subject to its decisions) or indirectly (people whose rights could be impacted without direct contact with the system). Pay particular attention to vulnerable groups: minors, elderly persons, persons with disabilities, minorities.

Step 3 - Mapping potentially affected fundamental rights

For each fundamental right listed in the Charter, assess whether that right can be affected by the system in your deployment context, the likelihood and severity of the potential impact, and whether the impact would be direct or indirect.
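A simple way to structure this mapping is a likelihood × severity grid per right. The sketch below uses 1-5 scales and a product score to rank rights for attention; both the scales and the scoring rule are assumptions - the AI Act mandates no particular methodology.

```python
# Illustrative rights mapping: score each potentially affected right by
# likelihood and severity, then rank. Scales and scores are assumptions.

rights_map = {
    "Non-discrimination": {"likelihood": 4, "severity": 5, "impact": "direct"},
    "Data protection":    {"likelihood": 3, "severity": 3, "impact": "direct"},
    "Effective remedy":   {"likelihood": 2, "severity": 4, "impact": "indirect"},
}

ranked = sorted(
    rights_map.items(),
    key=lambda item: item[1]["likelihood"] * item[1]["severity"],
    reverse=True,
)
for right, scores in ranked:
    print(f"{right}: risk score {scores['likelihood'] * scores['severity']}")
```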

Step 4 - Identification and assessment of concrete risks

Translate potential impacts into concrete risks. For example: a credit scoring system may create a risk of indirect discrimination if its training data reflects historical biases related to gender or origin. A student assessment system may infringe the right to education if its errors cannot be challenged.

Step 5 - Mitigation measures

For each identified risk, document the measures in place or planned to mitigate it: human oversight of sensitive decisions, appeal and challenge mechanisms for affected individuals, bias detection and correction procedures, usage restrictions, staff training on system limitations and biases.

Step 6 - Monitoring and review plan

The FRIA is not a one-off exercise. Define a review schedule, the monitoring indicators you will track, and the conditions that would trigger a new assessment (significant system modification, change in deployment context, reported incidents...).
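A monitoring plan can be reduced to explicit trigger conditions. The sketch below checks a hypothetical set of triggers; the annual review interval and the trigger list are illustrative choices, not regulatory requirements.

```python
# Sketch of a review-trigger check for the FRIA monitoring plan.
# Trigger names and the review interval are illustrative choices.

from datetime import date, timedelta

REVIEW_INTERVAL = timedelta(days=365)  # assumed annual review cycle

def fria_review_due(
    last_assessment: date,
    system_modified: bool,
    context_changed: bool,
    incidents_reported: int,
) -> bool:
    """Return True if any condition calls for a new assessment."""
    return (
        date.today() - last_assessment > REVIEW_INTERVAL
        or system_modified
        or context_changed
        or incidents_reported > 0
    )

print(fria_review_due(date(2026, 9, 1), system_modified=True,
                      context_changed=False, incidents_reported=0))  # True
```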

Step 7 - Notification and registration

Notify the market surveillance authority of the FRIA results, using the questionnaire template to be provided by the AI Office (Article 27(5)). Deployers that are public authorities must also register their use of the system in the EU database of high-risk AI systems (Article 71). Some information may be withheld for confidentiality reasons, but the summary of results must remain accessible.
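For the notification itself, an exportable, machine-readable summary helps. The JSON layout below is purely an assumption - the official format is the AI Office questionnaire under Article 27(5), which should be used once available - and all values describe a hypothetical deployment.

```python
# Sketch of exporting a FRIA summary for notification.
# The JSON layout and all values are assumptions for this example.

import json

fria_summary = {
    "system": "CandidateRank",            # hypothetical system name
    "deployer": "Example Municipality",   # hypothetical deployer
    "assessment_date": "2026-06-15",
    "affected_groups": ["job applicants"],
    "top_risks": ["indirect discrimination in shortlisting"],
    "mitigations": ["human review of all rejections", "quarterly bias audit"],
    "review_schedule": "annual",
}

with open("fria_summary.json", "w", encoding="utf-8") as f:
    json.dump(fria_summary, f, indent=2, ensure_ascii=False)
```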

Common Mistakes to Avoid

  • Conducting the FRIA after deployment: Article 27 is explicit - the assessment must take place before the system is put into service. A retrospective FRIA does not satisfy the legal obligation.
  • Limiting it to personal data: reducing the FRIA to an expanded DPIA is a frequent error. Non-discrimination, access to justice and workers' rights must be addressed even if the system processes no personal data.
  • Not involving affected individuals: consulting representatives of the populations concerned strengthens the quality of the assessment and its defensibility in the event of an audit.
  • Ignoring indirect risks: a system can affect rights without those individuals having direct contact with it. A social housing allocation algorithm affects the rights of people who never interact with the software directly.
  • Never revising it: a FRIA completed in 2025 for a system that has evolved significantly in 2026 is no longer valid. Plan periodic reviews.

The AiActo Deployer module guides organisations through their FRIA with more than 15 structured fields, covering all Article 27 requirements and generating an exportable document ready for regulatory registration.

Frequently Asked Questions

Is the FRIA mandatory for a private company using AI-powered HR software?

In principle, no, if the company is a standard commercial entity. Article 27 targets bodies governed by public law, private entities providing public services, and deployers of credit scoring or life and health insurance pricing systems. An SME using an AI CV screening tool must comply with Article 26 obligations (human oversight, logs, informing candidates) but does not in principle need to conduct a formal FRIA.

Can the FRIA be combined with the GDPR DPIA?

Yes, and the regulation encourages it. The two assessments can be conducted jointly if the AI system processes personal data. Their content will overlap notably on data protection, non-discrimination and security measures. The important thing is to ensure the FRIA covers the full scope of fundamental rights, beyond the DPIA's data protection focus.

What happens if the AI system is modified after the FRIA?

Article 27(2) requires the deployer to update the FRIA whenever any of its elements has changed or is no longer up to date. The AI Act separately defines "substantial modification" (Article 3(23)); in practice, a change affecting the system's capabilities, training data or scope of application calls for reassessment.

Does the FRIA need to be made public?

FRIA results must be notified to the market surveillance authority. For public-sector deployers, the use of the system is also registered in the EU database of high-risk AI systems, which is publicly accessible. Commercially sensitive information may be withheld from publication. The general rule is transparency, with limited exceptions to protect legitimate trade secrets.

What is the penalty for failing to conduct a FRIA?

Non-compliance with the deployer obligations of Chapter III is subject to fines of up to €15 million or, for undertakings, 3% of total worldwide annual turnover, whichever is higher (Article 99(4)). National market surveillance authorities will have investigation and enforcement powers from August 2026.
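The "whichever is higher" rule means the effective ceiling depends on turnover, as this worked example shows:

```python
# Worked example of the Article 99(4) fine ceiling: EUR 15 million or
# 3% of total worldwide annual turnover, whichever is higher.

def max_fine_eur(worldwide_annual_turnover_eur: float) -> float:
    return max(15_000_000, 0.03 * worldwide_annual_turnover_eur)

print(max_fine_eur(200_000_000))    # 15,000,000 (3% = 6M, the floor applies)
print(max_fine_eur(1_000_000_000))  # 30,000,000 (3% exceeds the floor)
```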

The FRIA is one of the least well-known obligations of the AI Act - and yet one of the most significant for public bodies and essential services operators. Conducting it seriously, before deployment, is both a legal requirement and an act of accountability towards the people your AI systems affect. Check the AI Act timeline to plan this step in your compliance calendar.
