Malicious Use of AI: What the AI Act Covers and What It Doesn't

In brief
- 9 sub-risks identified: from disinformation to bioweapons, malicious uses of AI span a broad spectrum that the AI Act addresses very unevenly
- Only 1 risk extensively covered: state surveillance and oppression is subject to direct prohibitions (social scoring, predictive policing, biometric identification)
- 4 risks partially covered: disinformation, abusive content, fraud and cyberattacks are addressed indirectly — through transparency obligations or other EU legislation
- 4 risks not covered: bioweapons, intentional rogue AIs, autonomous weapons and concentration of power remain outside the AI Act's direct scope
- Impact for businesses: providers and deployers must go beyond the AI Act alone and integrate malicious use risks into their overall risk management
An analysis published in February 2026 by the Real Instituto Elcano raises a challenging question: does the AI Act, presented as the global model for AI regulation, actually cover the most critical risks? Paula Oliver Llorente's study stress-tests the regulation against 9 sub-risks of malicious use — the intentional exploitation of AI capabilities to cause harm. The finding is unequivocal: coverage is deeply unbalanced. For businesses subject to Regulation (EU) 2024/1689, this reality has direct implications for their risk management strategy.
Malicious use of AI: what are we talking about?
Malicious use of AI refers to intentional practices that exploit the capabilities of AI systems to compromise the security of individuals, groups or society. The defining element is the intent to cause harm, which differentiates malicious use from accidental misuse or unintended consequences.
This category is also distinct from malicious abuse, which exploits the internal vulnerabilities of AI systems themselves — such as adversarial attacks — rather than weaponising their capabilities for harmful purposes.
Drawing on the work of international AI safety organisations, institutional reports and documented incidents, the analysis identifies 9 malicious use sub-risks:
- Bioweapons and chemical threats — Using AI to design pathogens, conduct biological attacks or provide instructions for reproducing existing weapons
- Intentional rogue AIs — Creating and deploying autonomous systems pursuing destructive goals (e.g. ChaosGPT), capable of adapting without human oversight
- Disinformation and persuasive AIs — Generating false or misleading content at scale, personalised persuasion exploiting cognitive vulnerabilities, foreign influence operations
- Fake and abusive content — Non-consensual intimate imagery (NCII), AI-generated child sexual abuse material (CSAM), voice impersonation, blackmail, reputational damage
- Fraud, scams and social engineering — AI systems (WormGPT, FraudGPT) producing convincing phishing, identity theft and fraudulent chatbots
- Cyber offence — Automating malware generation, vulnerability discovery, multilingual phishing, lowering entry barriers for attackers
- Autonomous weapons and military use — Deploying AI-enabled drones and weapon systems capable of targeting and attacking without human oversight
- Concentration of power — Governments or corporations using AI to entrench authority, suppress dissent or monopolise AI capabilities
- State surveillance and oppression — Mass surveillance, predictive policing, censorship and repression of minorities via AI
What the AI Act covers: an unbalanced map
The Real Instituto Elcano analysis maps the AI Act's provisions against these 9 sub-risks. The result is summarised below, reconstructing Figure 1 from the original study.
Table — Malicious use sub-risks and AI Act obligations
Legend: 🔴 No direct coverage — 🟡 Partial or indirect coverage — 🟢 Extensive coverage
- Bioweapons and chemical threats 🔴 — Only generic risk management and incident reporting provisions for GPAI models with systemic risks apply. The main safeguard remains international conventions (biological and chemical weapons)
- Intentional rogue AIs 🔴 — Same regulatory vacuum. Mitigation is limited to GPAI systemic risk management and human oversight obligations for high-risk AI systems. No specific provisions targeting the creation of destructive autonomous AI agents
- Disinformation and persuasive AIs 🟡 — The AI Act prohibits manipulative and deceptive techniques (Art. 5) and requires labelling of deepfakes and synthetic content (Art. 50). But personalised persuasion via AI chatbots is not covered. Partially complemented by the Digital Services Act (DSA)
- Fake and abusive content 🟡 — Indirect prohibitions (exploitation of vulnerabilities, Art. 5). But the most serious forms — non-consensual intimate imagery (NCII), AI-generated CSAM — are absent from the AI Act. Deepfake labelling provides weak protection. Complemented by the Non-Consensual Intimate Image Directive
- Fraud and social engineering 🟡 — Transparency and disclosure requirements may reduce the effectiveness of phishing and impersonation, but do not prohibit these practices. No explicit regulation of AI fraud tools
- Cyber offence 🟡 — Covered mainly by pre-existing EU legislation criminalising cyberattacks. The AI Act focuses on protecting high-risk systems against adversarial attacks (malicious abuse) rather than offensive use of AI. Complemented by the Cyber Resilience Act
- Autonomous weapons and military use 🔴 — Explicitly excluded from the AI Act's scope (defence and national security are Member State competences). Only dual-use AI systems (civil and military) are covered. A major gap in a catastrophic risk area
- Concentration of power 🔴 — State use of AI is partially limited through restrictions on predictive policing. But corporate concentration of AI power (data advantages, cloud infrastructure, computing power) is unaddressed. The Digital Markets Act does not cover AI-specific concentration dynamics
- State surveillance and oppression 🟢 — The most extensive coverage. The AI Act bans social scoring (Art. 5), predictive policing, certain types of biometric identification and biometric categorisation. Reflects the political salience of this issue in EU debates, particularly in light of authoritarian precedents
Why these gaps are intentional
The AI Act's unbalanced coverage is not an oversight. It results from a deliberate design choice to avoid regulatory redundancy.
The AI Act sits within a broader European legislative corpus. The development of bioweapons and the conduct of fraud and cyberattacks were already criminalised before the emergence of AI. EU legislators decided that AI-enabled crimes should not be treated differently from their traditional counterparts, and that protections should not duplicate existing legislation.
In practice, many sub-risks are complemented by other EU texts:
- Disinformation → Digital Services Act (DSA)
- Abusive content → Non-Consensual Intimate Image Directive
- Cyberattacks → Cyber Resilience Act
- Concentration of power → Digital Markets Act (DMA)
- Bioweapons → International conventions
- Autonomous weapons → International initiatives (UN)
This approach makes sense internally: it avoids overregulation and simplifies compliance. But it creates an exportability problem: third countries will not import the entire European regulatory ecosystem.
Cross-cutting limitations that weaken protection
Beyond coverage by sub-risk, two cross-cutting limitations weaken the AI Act against malicious use.
Personal use in a grey zone
Personal, non-professional uses of AI systems are not covered by the AI Act. The text relies on provider and developer obligations to limit downstream risks. Malicious individuals thus fall through the gaps, unless caught by criminal law. Yet AI amplifies both the incentives and the ease of malicious activity, making the prospect of criminal prosecution a weak deterrent.
'Reasonably foreseeable misuse': a vague concept
The AI Act establishes risk management obligations based on intended use and 'reasonably foreseeable misuse' for both high-risk AI systems and GPAI models with systemic risk. The problem is that this concept can have many interpretations, weakening enforcement consistency. Some companies are already exploiting this vagueness — as OpenAI demonstrated in court by arguing that using ChatGPT for self-harm constitutes 'misuse, unauthorised use, unintended use, unforeseeable use and/or improper use' of its product.
What this means for businesses
For providers and deployers of AI systems, this analysis has concrete implications.
Risk management cannot be limited to the AI Act
A risk management system compliant with Article 9 must identify 'known and reasonably foreseeable' risks. Malicious uses fall into this category — even if the AI Act does not address them all directly. Providers must therefore integrate malicious use risks into their technical documentation (Annex IV, Section 5), including those covered by other legislation.
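As an illustration, the malicious use scenarios that feed into Annex IV, Section 5 documentation can be tracked in a lightweight structured form. The sketch below is purely hypothetical — the AI Act prescribes no data format, and every field name here is an assumption chosen for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class MaliciousUseRisk:
    """One illustrative entry in a malicious-use risk register.

    Hypothetical structure — not a format prescribed by the AI Act
    or Annex IV; field names are assumptions for this sketch.
    """
    scenario: str                       # malicious use scenario considered
    sub_risk: str                       # e.g. "fraud and social engineering"
    covered_elsewhere: list[str] = field(default_factory=list)  # other EU texts that address it
    mitigations: list[str] = field(default_factory=list)        # measures adopted
    residual_risk: str = "low"          # accepted residual risk after mitigation

# Example entry: voice-cloning phishing, one of the partially covered sub-risks.
entry = MaliciousUseRisk(
    scenario="Voice cloning of customer-service agents used for phishing",
    sub_risk="fraud and social engineering",
    covered_elsewhere=["existing criminal law"],
    mitigations=[
        "label synthetic audio (Art. 50 transparency)",
        "disclose AI interaction to users",
    ],
    residual_risk="medium",
)
```

The point of such a register is traceability: each scenario records not only the mitigation adopted but also which legislation outside the AI Act already addresses it, which is exactly the gap analysis the Elcano study performs at the level of the regulation itself.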
Transparency obligations as the first line of defence
For the 4 partially covered sub-risks, the AI Act's transparency obligations (deepfake labelling, disclosure of human-machine interactions, deployer information) constitute the first line of defence. Their effective implementation is essential, even though they remain insufficient against determined malicious actors.
GPAI models with systemic risk on the front line
Providers of GPAI models with systemic risk (Articles 51-56) bear particular responsibility. Their risk management, adversarial testing and incident reporting obligations are the AI Act's only safety net for 4 of the 9 identified sub-risks. The quality of their implementation has a direct impact on the safety of the entire ecosystem.
Do you develop or deploy an AI system and want to assess your exposure to malicious use risks? The free AiActo diagnostic identifies your risk level and obligations in under 3 minutes.
The Brussels Effect in question
The AI Act was designed with the ambition of becoming the global model for AI governance — the so-called 'Brussels Effect'. The Council of the EU presented it as potentially setting a global standard for AI regulation.
Yet the analysis shows that this ambition is undermined by risk coverage gaps. A regulatory framework that leaves AI-assisted bioweapons, destructive autonomous systems and technological power concentration outside its direct scope loses credibility as a global model.
The Digital Omnibus, proposed by the Commission in November 2025, worsens the problem: it delays the entry into force of certain obligations for high-risk systems, introduces transitional periods for GPAI watermarking and broadens the data usable for training — potentially increasing the effectiveness of disinformation and social engineering systems.
For European businesses committed to compliance, the signal is ambiguous: on the one hand, the August 2026 deadlines are approaching; on the other, the regulatory framework itself is in flux.
Practical recommendations
In this context, three courses of action emerge for organisations.
- Integrate malicious use risks into Article 9 risk analysis — Do not limit yourself to intended uses. Explicitly document malicious use scenarios and mitigation measures, including for risks covered by other legislation
- Monitor the complete regulatory ecosystem — The AI Act does not function in isolation. Identify applicable complementary texts (DSA, Cyber Resilience Act, NCII Directive, DMA) and integrate their requirements into overall compliance strategy
- Anticipate AI Act revisions — Article 112 provides for periodic revisions. The list of high-risk systems (Annex III) can be modified through Delegated Acts. Businesses should track the evolution of the framework, particularly on personalised persuasion and AI-generated content
Compliance with the AI Act is a necessary step, but not a sufficient one. Organisations that take a broad view of risks — including malicious uses beyond the regulation's direct scope — will be better prepared, both for regulators and for the real threats that AI amplifies.
Frequently asked questions
Does the AI Act ban deepfakes?
The AI Act does not ban deepfakes as such. Article 50 imposes transparency obligations: synthetic content (images, audio, video) must be labelled in a machine-readable way. Human-machine interactions must be disclosed. But these obligations only apply to providers and deployers, not to individuals using AI personally.
Are AI-assisted cyberattacks covered by the AI Act?
Indirectly. The AI Act protects high-risk systems against adversarial attacks (Art. 15 — cybersecurity). But the offensive use of AI to conduct cyberattacks is covered by existing criminal law and the Cyber Resilience Act, not by the AI Act itself.
Why are autonomous weapons excluded from the AI Act?
Defence and national security are Member State competences within the EU, not Union competences. The AI Act explicitly excludes AI systems developed or used exclusively for military purposes. Only dual-use systems (civil and military) are covered.
What is 'reasonably foreseeable misuse' in the AI Act?
It is a central concept in Article 9 on risk management. The provider must identify and mitigate risks related not only to the intended use of their system, but also to any misuse they can reasonably anticipate. The vagueness of this concept creates uncertainties in interpretation and enforcement.
Does the Digital Omnibus weaken malicious risk coverage?
Yes, according to several analyses. The Digital Omnibus, proposed in November 2025, delays certain obligations for high-risk AI systems (up to 16 additional months), introduces transitional periods for GPAI watermarking and broadens training data allowances. These measures could increase the effectiveness of disinformation and social engineering systems.
How do I integrate malicious use risks into my technical documentation?
Section 5 of Annex IV (risk management system) requires the identification of 'known and reasonably foreseeable' risks. Document relevant malicious use scenarios for your system, the mitigation measures adopted and the residual risks accepted. Platforms like AiActo structure this process section by section.
The AI Act is a major step forward for AI governance, but its coverage of malicious use risks remains incomplete. Businesses cannot simply tick the regulation's boxes: they must take a broader view of risks, integrating the entire European regulatory ecosystem and the specific threats that AI amplifies. Only then will compliance translate into real protection.