Article 5 of the EU Artificial Intelligence Act became enforceable on February 2, 2025, establishing categorical prohibitions on AI practices that present significant risks to individuals, society, or fundamental EU values. This landmark regulation identifies eight specific AI practices that are completely banned within the European Union, with penalties reaching up to €35 million or 7% of global annual turnover for violations.
The prohibited practices include AI systems that manipulate people’s decisions or exploit their vulnerabilities, systems that evaluate or classify people based on their social behavior or personal traits, and systems that predict a person’s risk of committing a crime.
The European Commission has since published comprehensive guidelines to ensure consistent, effective, and uniform application across all EU Member States. Let’s go through the guidance and understand what it means for AI providers, deployers, and users.
What is Article 5 of the EU AI Act?
Article 5 represents the “unacceptable risk” category within the EU AI Act’s risk-based framework. It establishes categorical prohibitions on artificial intelligence practices that present significant risks to individuals, society, or the fundamental values of the European Union, specifically designed to eliminate AI applications deemed harmful while safeguarding fundamental rights.
Key Objectives of Article 5
- Protect fundamental rights: Aligns with principles enshrined in the EU Charter of Fundamental Rights, including the right to privacy and prohibition of exploitation of vulnerable groups
- Prevent manipulation: Eliminates AI systems designed to distort human behaviour without consent
- Safeguard vulnerable populations: Protects children, elderly, and disabled individuals from AI exploitation
- Maintain democratic values: Prevents social scoring and discriminatory profiling systems
The 8 Prohibited AI Practices Under Article 5
1. Subliminal and Manipulative AI Systems (Article 5.1.a)
Prohibition: AI systems that deploy subliminal techniques beyond a person’s consciousness, or purposefully manipulative or deceptive techniques, that materially distort a person’s ability to make an informed decision
Key Elements:
- Subliminal techniques beyond conscious awareness
- Purposeful manipulation or deception
- Material distortion of behavior
- Impairment of informed decision-making
Examples:
- Hidden audio/visual stimuli in advertising
- Covert psychological manipulation in user interfaces
- Deceptive design patterns in AI-powered platforms
2. Exploitation of Vulnerabilities (Article 5.1.b)
Prohibition: AI systems that exploit vulnerabilities of natural persons or specific groups of persons due to their age, disability, or social or economic situation, in a manner that materially distorts their behaviour and causes, or is reasonably likely to cause, significant harm
Protected Vulnerabilities:
- Age (children and elderly)
- Disabilities (physical or mental)
- Socio-economic circumstances
- Group associations
Examples:
- AI systems promoting high-interest credit products to financially desperate individuals
- Targeting children with addictive gaming mechanics
- Exploiting cognitive disabilities for commercial gain
3. Biometric Categorisation (Article 5.1.g)
Prohibition: AI systems that use biometric categorisation to categorise natural persons based on their biometric data to deduce race, political opinions, trade union membership, religious or philosophical beliefs, sex life, or sexual orientation
Prohibited Inferences:
- Race or ethnic origin
- Political opinions
- Trade union membership
- Religious or philosophical beliefs
- Sex life or sexual orientation
Exception: Labelling or filtering of lawfully acquired biometric datasets, and categorisation of biometric data in the area of law enforcement
4. Social Scoring (Article 5.1.c)
Prohibition: AI systems used for evaluation or classification of people based on their social behavior or known, inferred, or predicted personal or personality characteristics where such social scoring leads to detriment or unfavorable treatment
Prohibited When:
- Treatment occurs in unrelated social contexts
- Treatment is unjustified or disproportionate
- Results in systematic disadvantage
5. Risk Assessment for Criminal Behavior (Article 5.1.d)
Prohibition: AI systems for making risk assessments of natural persons to assess or predict the risk of committing a criminal offense, based solely on profiling or assessing personality traits and characteristics
Exception: AI systems used to support human assessment of involvement in criminal activity, based on objective and verifiable facts directly linked to criminal activity
6. Facial Recognition Database Creation (Article 5.1.e)
Prohibition: AI systems that create or expand facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage
Key Aspects:
- Untargeted collection (not specific individuals)
- Internet scraping
- CCTV footage harvesting
- Database expansion for facial recognition
7. Emotion Recognition in Workplace/Education (Article 5.1.f)
Prohibition: AI systems to infer emotions of natural persons in workplace or educational institutions
Exception: AI systems used for medical or safety reasons
Applications:
- Employee mood monitoring for performance evaluation
- Student emotional state tracking for discipline
- Workplace surveillance for emotional compliance
8. Real-Time Remote Biometric Identification (Article 5.1.h)
Prohibition: Real-time remote biometric identification systems in publicly accessible spaces for law enforcement
Limited Exceptions for Law Enforcement:
- Targeted search for missing persons, abduction victims, and victims of human trafficking or sexual exploitation
- Preventing substantial and imminent threat to life, or foreseeable terrorist attack
- Identifying suspects in serious crimes (murder, rape, armed robbery, narcotic and illegal weapons trafficking, organized crime, environmental crime)
EU AI Act Article 5: Prohibited Practices at a Glance
| Practice | Article Reference | Core Prohibition | Key Exceptions |
|---|---|---|---|
| Subliminal Manipulation | 5.1.a | Subliminal techniques, deceptive manipulation | None |
| Vulnerability Exploitation | 5.1.b | Exploiting age, disability, socio-economic status | None |
| Biometric Categorisation | 5.1.g | Inferring sensitive attributes from biometrics | Law enforcement datasets |
| Social Scoring | 5.1.c | Evaluating/classifying based on social behaviour | None |
| Criminal Risk Assessment | 5.1.d | Predicting criminal behaviour from profiling | Human-supported systems with objective facts |
| Facial Database Creation | 5.1.e | Untargeted scraping for facial recognition | None |
| Emotion Recognition | 5.1.f | Inferring emotions in workplace/education | Medical or safety purposes |
| Real-Time Biometric ID | 5.1.h | Live biometric identification in public spaces | Specific law enforcement purposes |
EU AI Act Article 5 Compliance Requirements and Enforcement
Enforcement Timeline
Article 5 prohibited practices became enforceable on February 2, 2025, marking the first major compliance deadline under the EU AI Act.
Penalties for Non-Compliance
Organizations violating Article 5 face substantial financial penalties, including fines of up to EUR 35 million or 7% of global annual turnover, whichever is higher.
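Because the cap is “whichever is higher,” it scales with company size: small firms face the €35 million floor, while large firms face the 7% ceiling. A minimal sketch of that arithmetic (illustrative only; actual fines are set by national authorities within this upper bound, and the function name is an assumption, not official terminology):

```python
def article5_fine_cap(global_annual_turnover_eur: float) -> float:
    """Upper bound of a fine for an Article 5 violation:
    EUR 35 million or 7% of worldwide annual turnover, whichever is higher."""
    return max(35_000_000.0, global_annual_turnover_eur * 7 / 100)

# EUR 200M turnover: 7% is EUR 14M, so the EUR 35M floor applies.
print(article5_fine_cap(200_000_000))    # 35000000.0
# EUR 1B turnover: 7% is EUR 70M, which exceeds the floor.
print(article5_fine_cap(1_000_000_000))  # 70000000.0
```

The same max-of-two-caps structure appears elsewhere in the Act’s penalty regime, with lower figures for less severe violations.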
Enforcement Authorities
Member States are required to establish competent authorities for oversight and enforcement, ensuring compliance with the Act’s requirements. These authorities have the power to investigate breaches, impose sanctions, and provide guidance on best practices for AI governance.
Article 5: Scope and Definitions
Material Scope: Key Actions Covered
**Placing on the market:** The first making available of an AI system on the EU market, i.e. supplying the system for distribution or use in the EU in the course of a commercial activity, whether for payment or free of charge.

**Putting into service:** Supplying an AI system for first use, directly to a deployer or for own use, within the EU for the system’s intended purpose.

**Use:** Interpreted broadly to cover any deployment or integration of the system at any point in its lifecycle after it has been placed on the market or put into service.
Personal Scope: Who is Covered?
**Providers:** Entities that develop an AI system, or have one developed, and place it on the Union market or put it into service in the EU under their own name or trademark.

**Deployers:** Entities that use AI systems under their authority in the course of professional activities. Authority implies responsibility for deploying the system and for how it is used.

**Geographic scope:** Providers outside the EU are also subject to the AI Act if they place systems on the market or put them into service in the European Union, or if the AI system’s output is used within the EU.
European Commission Guidelines (February 2025)
The European Commission issued comprehensive guidelines on February 4, 2025, clarifying vague legal concepts, establishing enforcement principles, outlining compliance mechanisms, distinguishing permissible from prohibited AI applications, and describing judicial oversight measures.
Key Clarifications from Guidelines
**Personalised advertising:** The use of AI to personalise ads based on user preferences is “not inherently manipulative” so long as it does not deploy subliminal techniques or exploit vulnerabilities.

**Lawful persuasion:** An AI system is likely to engage in “lawful persuasion” where it operates transparently, facilitates free and informed consent, and complies with relevant legal and regulatory frameworks.

**Predictive policing:** The prohibition on predictive policing applies to law enforcement, but it can also extend to private actors asked to act on law enforcement’s behalf.
What Does Article 5 Mean for Industry and Business?
Most Relevant Prohibitions for Businesses
- Marketing and advertising: Restrictions on manipulative AI in consumer-facing applications
- Human resources: Limitations on emotion recognition and behavioral profiling
- Financial services: Prohibitions on exploiting financial vulnerabilities
- Social media: Restrictions on manipulation and vulnerability exploitation
- EdTech: Emotion recognition limitations in educational settings
Article 5 Compliance Recommendations
Immediate Actions:
- Audit existing AI systems against Article 5 prohibitions
- Implement AI literacy training for relevant staff
- Develop internal compliance procedures
- Establish monitoring and documentation processes
Ongoing Requirements:
- Regular compliance assessments
- Staff training updates
- Documentation maintenance
- Regulatory monitoring
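An internal audit can begin as a simple screening checklist that maps each AI system against the eight prohibitions and flags anything needing legal review. The sketch below is a hypothetical starting point, not a legal assessment; the screening questions and function names are illustrative assumptions:

```python
# Illustrative Article 5 screening checklist (not legal advice).
ARTICLE_5_CHECKS = {
    "5.1.a": "Does the system use subliminal, manipulative, or deceptive techniques?",
    "5.1.b": "Does it exploit age, disability, or socio-economic vulnerabilities?",
    "5.1.c": "Does it socially score people with detrimental or unfavourable effects?",
    "5.1.d": "Does it predict criminal risk based solely on profiling?",
    "5.1.e": "Does it build facial recognition databases via untargeted scraping?",
    "5.1.f": "Does it infer emotions in workplace or educational settings?",
    "5.1.g": "Does it infer sensitive attributes from biometric data?",
    "5.1.h": "Does it perform real-time remote biometric ID in public spaces?",
}

def screen_system(answers: dict[str, bool]) -> list[str]:
    """Return the Article 5 provisions flagged for legal review."""
    return [ref for ref, flagged in answers.items() if flagged]

# Example: an HR tool that monitors employee mood trips the 5.1.f check.
flags = screen_system({ref: ref == "5.1.f" for ref in ARTICLE_5_CHECKS})
print(flags)  # ['5.1.f']
```

Any flagged provision would then trigger the case-by-case assessment described above, ideally documented alongside the system’s audit trail.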
Key Definitions for EU AI Act – Article 5
| Term | Definition |
|---|---|
| AI System | Machine-based system that infers outputs like predictions, content, recommendations, or decisions for explicit or implicit objectives |
| Provider | Entity that develops, has developed, or places AI systems on the EU market |
| Deployer | Entity that uses AI systems under their authority for professional activities |
| Real-time | Identification in which the capture, comparison, and identification of biometric data occur without significant delay |
| Biometric Data | Personal data from technical processing relating to physical, physiological, or behavioral characteristics |
| Subliminal Techniques | Methods operating beyond conscious awareness to influence behavior |
| Vulnerable Groups | Persons with specific characteristics making them susceptible to manipulation |
| Social Scoring | Evaluation or classification based on social behavior or personal traits |
Frequently Asked Questions (FAQ)
When did Article 5 prohibitions become enforceable?
Article 5 prohibitions became enforceable on February 2, 2025.
What are the penalties for violating Article 5?
Fines up to EUR 35 million or 7% of global annual turnover, whichever is higher.
Do these prohibitions apply to open-source AI systems?
Yes. The AI Act’s open-source exemption does not extend to systems that are marketed or deployed as high-risk AI systems or that fall under Article 5 prohibitions, so prohibited practices remain banned regardless of licensing model.
Can AI be used for personalised advertising?
Yes, personalised advertising based on user preferences is “not inherently manipulative” if it doesn’t use subliminal techniques or exploit vulnerabilities.
Are there exceptions for law enforcement use of biometric identification?
Yes, real-time biometric identification is allowed for searching missing persons, preventing imminent threats, and identifying suspects in serious crimes, with strict safeguards.
What constitutes “manipulation” under Article 5?
Manipulation involves subliminal techniques, purposeful deception, or exploiting vulnerabilities that materially distort behaviour and impair informed decision-making.
How do I know if my AI system violates Article 5?
Companies should thoroughly assess, on a case-by-case basis, whether the specific AI application is deemed prohibited under Article 5.
What is required for emotion recognition exceptions?
Emotion recognition in workplace or educational settings is only permitted for medical or safety reasons.
Do Article 5 prohibitions apply to AI research?
The prohibitions apply to placing on the market, putting into service, or using AI systems. Research activities may be exempt depending on context and purpose.
How often will Article 5 be updated?
The European Commission will assess the need to amend the prohibited practices list once a year and share findings with EU lawmakers.
Related EU AI Act Provisions
- Article 4: AI Literacy Requirements (also effective February 2, 2025)
- Article 6: High-Risk AI Systems Classification
- Article 27: Fundamental Rights Impact Assessment
- Article 49: EU Database Registration Requirements
- Article 96: Commission Guidelines Authority
- Article 99: Penalties and Enforcement Measures
Implementation Timeline
| Date | Milestone |
|---|---|
| August 1, 2024 | EU AI Act entered into force |
| February 2, 2025 | Article 5 prohibitions + AI literacy requirements enforceable |
| August 2, 2025 | High-risk AI system obligations + governance structures active |
| August 2, 2026 | Full AI Act implementation (all remaining provisions) |
Final Thoughts
Article 5 of the EU AI Act represents a foundational pillar in the world’s first comprehensive AI regulation framework. By explicitly delineating prohibited practices, it ensures regulatory clarity while reinforcing protection of human dignity, non-discrimination, and personal autonomy.
Organizations operating in or serving the EU market must prioritize compliance with these prohibitions to avoid severe penalties and reputational damage. The February 2025 enforcement date marks the beginning of a new era in AI governance, in which the protection of fundamental rights takes precedence over unconstrained technological innovation.
Key Takeaways:
- Eight specific AI practices are completely banned in the EU
- Enforcement began February 2, 2025 with severe penalties
- Compliance requires ongoing assessment and documentation
- Commission guidelines provide detailed implementation guidance
- Both EU and non-EU entities are covered if AI outputs are used in the EU
For organisations developing or deploying AI systems, understanding and complying with Article 5 prohibitions is not optional—it’s a business-critical requirement for operating in the European market.
This guide is based on Regulation (EU) 2024/1689 and European Commission Guidelines published February 2025. For the most current information, consult the official EU AI Act text and latest Commission guidance.