The EU AI Act entered into force on August 1, 2024, and its prohibitions on certain AI systems have applied since February 2, 2025. Article 5 of the EU AI Act establishes eight categories of prohibited AI practices that pose unacceptable risks to fundamental rights and EU values. These prohibitions represent the world’s first comprehensive legal framework banning specific AI applications because of their potential for harm.

Key Takeaways:

  • February 2, 2025 marked the first compliance deadline for prohibited AI systems
  • Maximum penalties reach €35 million or 7% of worldwide annual turnover, whichever is higher
  • Eight distinct categories of AI practices are banned across the EU
  • Limited exceptions exist for law enforcement and medical/safety purposes
  • The AI Act applies to providers and deployers regardless of their location if AI outputs are used in the EU

Understanding Prohibited AI Systems

Legal Framework and Timeline

The EU AI Act takes a risk-based approach to AI regulation, categorising systems into prohibited, high-risk, and transparency-obligated categories. The prohibited practices under Article 5 are considered harmful and abusive as they contradict Union values, the rule of law, and fundamental rights.

EU AI Act Implementation Timeline

| Date | Applicable Provisions | Description |
|---|---|---|
| August 1, 2024 | Entry into Force | EU AI Act enters into force across all 27 EU Member States |
| February 2, 2025 | Prohibited Systems Ban | Ban on AI systems posing unacceptable risks and introduction of AI literacy requirements |
| August 2, 2025 | Governance & Penalties | Governance rules, GPAI model obligations, and penalty frameworks become effective |
| August 2, 2026 | High-Risk Systems | Most other AI Act obligations become effective, including high-risk system requirements |
| August 2, 2027 | Extended Transition | GPAI models placed on the market before August 2, 2025 must achieve compliance by this date |
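The staged timeline above lends itself to a simple date lookup. The sketch below is a minimal, hypothetical helper (the function and label names are illustrative, not part of the Act) that returns which milestones have already begun to apply as of a given date:

```python
from datetime import date

# Illustrative only: AI Act milestones from the timeline above.
MILESTONES = [
    (date(2024, 8, 1), "Entry into force"),
    (date(2025, 2, 2), "Article 5 prohibitions and AI literacy requirements"),
    (date(2025, 8, 2), "Governance rules, GPAI obligations, and penalties"),
    (date(2026, 8, 2), "High-risk system requirements"),
    (date(2027, 8, 2), "Compliance deadline for GPAI models marketed before August 2, 2025"),
]

def obligations_in_effect(on: date) -> list[str]:
    """Return the milestone labels whose application date is on or before `on`."""
    return [label for d, label in MILESTONES if d <= on]

print(obligations_in_effect(date(2025, 3, 1)))
# -> ['Entry into force', 'Article 5 prohibitions and AI literacy requirements']
```

Note the gap this makes visible: between February 2 and August 2, 2025, the Article 5 prohibitions apply but the penalty framework does not yet.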

Key Definitions

Prohibited AI Practice: An AI system whose placing on the market, putting into service, or use is banned under Article 5 of the EU AI Act due to unacceptable risks to fundamental rights.

AI System: A machine-based system designed to operate with varying levels of autonomy, which may exhibit adaptiveness after deployment and which infers, from the inputs it receives, how to generate outputs.

Provider: A natural or legal person, public authority, agency, or other body that develops an AI system or general-purpose AI model or has it developed and places it on the market or puts it into service under its name or trademark.

Deployer: A natural or legal person, public authority, agency, or other body using an AI system under its authority, except where the AI system is used in the course of a personal non-professional activity.

Real-time Remote Biometric Identification: The use of an AI system to identify persons without a significant delay by comparing biometric data from a person present in a publicly accessible space with biometric data contained in a reference database.


The Eight Categories of Prohibited AI Systems

1. Subliminal and Manipulative Techniques

AI systems that deploy subliminal techniques beyond a person’s consciousness or purposefully manipulative or deceptive techniques are prohibited when they materially distort behavior and cause significant harm.

Examples:

  • Voice-activated toys that encourage dangerous behavior in children
  • AI systems using subliminal audio or visual cues to influence purchasing decisions
  • Apps that exploit psychological vulnerabilities to promote addictive behaviors

2. Exploitation of Vulnerabilities

AI systems that exploit vulnerabilities of natural persons due to their age, disability, or specific social or economic situation with the objective of materially distorting their behavior are banned.

Vulnerable Groups Protected:

  • Children and elderly individuals
  • Persons with disabilities
  • Economically disadvantaged populations
  • Individuals in specific social situations

3. Social Scoring Systems

AI systems for evaluation or classification of natural persons based on social behavior or personal characteristics are prohibited when leading to detrimental treatment in unrelated contexts.

Prohibited Social Scoring Elements

| Prohibited Element | Description | Examples | Maximum Penalty |
|---|---|---|---|
| Cross-Context Penalties | Detrimental treatment in social contexts unrelated to where data was originally generated | Credit scoring based on social media activity | Up to €35 million or 7% of turnover |
| Disproportionate Treatment | Treatment unjustified or disproportionate to social behaviour or its gravity | Employment restrictions based on minor traffic violations | Up to €35 million or 7% of turnover |
| Comprehensive Profiling | Systematic evaluation of personal characteristics over time | Government citizen scoring systems | Up to €35 million or 7% of turnover |
| Behavioural Prediction | Assessment based on inferred personality traits | Social trust scoring for service access | Up to €35 million or 7% of turnover |

4. Predictive Criminal Risk Assessment

AI systems for making risk assessments to predict criminal behavior based solely on profiling or personality trait assessment are prohibited. However, systems supporting human assessment based on objective, verifiable facts linked to criminal activity are allowed.

5. Facial Recognition Database Expansion

AI systems that create or expand facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage are banned.

Prohibited Activities:

  • Scraping social media platforms for facial images
  • Harvesting faces from public CCTV systems
  • Building databases from internet image searches
  • Unauthorised collection from public spaces

6. Emotion Recognition in Workplaces and Schools

AI systems that infer emotions of natural persons in workplace and education institutions are prohibited, except for medical or safety reasons.

Emotion Recognition Restrictions

| Context | Prohibition Status | Permitted Exceptions |
|---|---|---|
| Workplace | Prohibited | Medical or safety reasons only |
| Educational Institutions | Prohibited | Medical or safety reasons only |
| Healthcare Settings | Permitted | When used for medical purposes |
| Safety Applications | Permitted | Driver fatigue detection, safety monitoring |
| Public Spaces | Not specifically regulated under this provision | Subject to other regulations |

7. Biometric Categorisation for Sensitive Attributes

Biometric categorisation systems that categorise individuals based on biometric data to deduce race, political opinions, trade union membership, religious or philosophical beliefs, sex life, or sexual orientation are prohibited.

Prohibited Inferences:

  • Racial or ethnic classification
  • Political opinion assessment
  • Religious belief determination
  • Sexual orientation prediction
  • Trade union membership identification

8. Real-Time Remote Biometric Identification

Real-time remote biometric identification systems in publicly accessible spaces for law enforcement are generally prohibited, with limited exceptions for specific objectives.

Permitted Exceptions for Real-Time Biometric Identification

| Permitted Objective | Requirements | Safeguards |
|---|---|---|
| Targeted search for trafficking victims | Prior judicial or administrative authorisation | Fundamental rights impact assessment |
| Prevention of imminent threat to life or terrorist attack | Registration in EU database | Temporal, geographic, and personal limitations |
| Identification of criminal suspects for serious offences | Crimes punishable by at least 4 years of custody | 24-hour authorisation deadline in urgent cases |

Legal Requirements and Compliance

Enforcement Timeline

While the prohibited practices took effect on February 2, 2025, the penalty provisions only come into force on August 2, 2025. This creates a transitional period where violations are legally prohibited but formal penalties are not yet applicable.

Penalty Framework

Fines associated with breach of Article 5 may be a maximum of €35 million or 7% of total worldwide annual turnover, whichever is higher.

Compliance Requirements by Actor Type

| Actor Type | Primary Obligations | Timeline | Penalties |
|---|---|---|---|
| AI System Providers | Ensure systems do not involve prohibited AI practices | February 2, 2025 | Up to €35 million or 7% of turnover |
| AI System Deployers | Verify that deployed systems comply with prohibitions | February 2, 2025 | Up to €35 million or 7% of turnover |
| Law Enforcement | Obtain prior authorisation for biometric identification | February 2, 2025 | Administrative sanctions |
| Educational Institutions | Avoid using emotion recognition except for medical or safety purposes | February 2, 2025 | Administrative sanctions |
| Employers | Prohibit workplace emotion recognition except for medical or safety purposes | February 2, 2025 | Administrative sanctions |

AI Literacy Requirements

Article 4 requires providers and deployers to take suitable measures ensuring their personnel have sufficient AI literacy to operate AI systems and understand opportunities, risks, and potential harms.


Global Impact and Extraterritorial Application

Territorial Scope

The EU AI Act applies to providers and deployers of AI systems in third countries if the output produced by the AI system is being used in the EU. This extraterritorial application means:

  • Non-EU companies providing AI services to EU users must comply
  • Cloud-based AI systems used by EU entities are covered
  • AI outputs consumed in the EU trigger compliance obligations
  • Supply chain partners may need to verify compliance

International Business Implications

Organisations worldwide must assess whether their AI systems:

  1. Are used by EU-based clients or customers
  2. Process data of EU residents
  3. Provide outputs consumed within EU territory
  4. Fall under any prohibited categories

Recent Developments and Guidelines

European Commission Guidelines

The European Commission published guidelines on prohibited artificial intelligence practices to ensure consistent, effective, and uniform application of the AI Act across the European Union. While these guidelines offer valuable insights into the Commission’s interpretation of the prohibitions, they are non-binding, with authoritative interpretations reserved for the Court of Justice of the European Union (CJEU).

Ongoing Stakeholder Consultation

The European AI Office launched a stakeholder consultation in November 2024 on prohibited practices, with responses informing preparation of European Commission guidelines on the definition of AI systems and prohibited practices.


Industry-Specific Impacts

Technology Sector

  • Social media platforms must review recommendation algorithms for manipulative techniques
  • Dating apps cannot use AI to exploit emotional vulnerabilities
  • Gaming companies must avoid systems that manipulate user behaviour through subliminal techniques

Financial Services

  • Credit scoring systems cannot use social media data for cross-context evaluation
  • Insurance companies cannot use AI to predict criminal behavior for risk assessment
  • Banks must avoid emotion recognition in customer service applications

Healthcare and Education

  • Medical institutions can use emotion recognition for legitimate medical purposes
  • Schools cannot deploy emotion recognition for behavioral monitoring
  • Educational technology must avoid exploiting student vulnerabilities

Law Enforcement

  • Police must obtain judicial authorisation for real-time facial recognition
  • Predictive policing systems cannot rely solely on personality profiling
  • Criminal risk assessments must be based on objective, verifiable facts

Frequently Asked Questions (FAQ)

What are prohibited systems under the EU AI Act?

Prohibited systems under the EU AI Act are the eight categories of AI practices banned under Article 5 due to their unacceptable risks to fundamental rights: manipulative or subliminal AI, exploitation of vulnerabilities, social scoring systems, predictive criminal risk assessment, untargeted facial recognition database expansion, emotion recognition in workplaces and schools, biometric categorisation for sensitive attributes, and real-time remote biometric identification in public spaces.

When did the EU AI Act prohibitions take effect?

The prohibitions on AI systems posing unacceptable risks took effect on February 2, 2025. However, penalty provisions for breaches only come into force on August 2, 2025.

Do EU AI Act prohibitions apply to non-EU companies?

Yes, the EU AI Act applies to providers and deployers of AI systems in third countries if the output produced by the AI system is being used in the EU. This means non-EU companies must comply if their AI systems serve EU users or markets.

What are the penalties for using prohibited AI systems?

Fines for violating prohibited AI practices may be a maximum of €35 million or 7% of total worldwide annual turnover, whichever is higher. These represent some of the highest penalties in EU technology regulation.

Are there any exceptions to the prohibited AI systems?

Yes, there are limited exceptions:

  • Emotion recognition is permitted for medical or safety reasons
  • Real-time biometric identification has exceptions for searching trafficking victims, preventing terrorist attacks, and investigating serious crimes
  • Criminal risk assessment systems can support human assessment based on objective, verifiable facts

What is considered a manipulative AI system under the EU AI Act?

Manipulative AI systems are those that deploy subliminal techniques beyond consciousness or purposefully manipulative/deceptive techniques to materially distort behavior and cause significant harm. Examples include voice-activated toys encouraging dangerous behavior or apps exploiting psychological vulnerabilities.

How do social scoring prohibitions work in practice?

Social scoring systems are prohibited when they evaluate persons based on social behaviour or personal characteristics, leading to detrimental treatment in unrelated contexts or disproportionate treatment. This bans comprehensive citizen scoring systems and penalties imposed in contexts unrelated to where the underlying data was generated.

What constitutes facial recognition database expansion under the prohibition?

Creating or expanding facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage is prohibited. This includes harvesting faces from social media, public cameras, or internet image searches.

Can employers use emotion recognition AI in the workplace?

AI systems that infer emotions in workplace contexts are prohibited, except where intended for medical or safety reasons. This means general employee monitoring through emotion recognition is banned, but safety applications like fatigue detection may be permitted.

What biometric categorisation is prohibited under the EU AI Act?

Biometric categorisation systems that deduce race, political opinions, trade union membership, religious beliefs, sex life, or sexual orientation from biometric data are prohibited. This protects sensitive personal attributes from AI-based inference.

How does the real-time biometric identification prohibition work?

Real-time remote biometric identification in publicly accessible spaces for law enforcement is generally prohibited, with exceptions for searching trafficking victims, preventing imminent threats, and investigating serious crimes. Such use requires prior judicial authorisation, fundamental rights impact assessment, and EU database registration.

What are AI literacy requirements under the EU AI Act?

Providers and deployers must take measures ensuring their personnel have sufficient AI literacy to operate AI systems and understand opportunities, risks, and potential harms. This became effective February 2, 2025.

Are there transitional provisions for existing AI systems?

Providers of GPAI models placed on the EU market before August 2, 2025 have until August 2, 2027 to achieve compliance. However, prohibited practices must be discontinued immediately from February 2, 2025.

How are prohibited AI systems enforced across EU member states?

By August 2, 2025, EU Member States must designate national authorities responsible for AI Act enforcement. The European AI Office and national market surveillance authorities share enforcement responsibilities.

What should companies do to ensure compliance with prohibited systems rules?

Companies should:

  1. Audit existing AI systems against the eight prohibited categories
  2. Implement AI literacy programs for personnel
  3. Establish compliance monitoring processes
  4. Review AI applications for extraterritorial EU impact
  5. Prepare for August 2025 penalty enforcement
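The first step above, auditing existing systems against the eight prohibited categories, can be organised as a simple screening pass that also applies the extraterritorial trigger discussed earlier. The sketch below is purely illustrative: the category names follow this article, but the data model, function names, and logic are hypothetical assumptions, not legal advice or an official tool.

```python
from dataclasses import dataclass, field

# Illustrative labels for the eight Article 5 categories (naming is ours).
PROHIBITED_CATEGORIES = {
    "subliminal_or_manipulative_techniques",
    "exploitation_of_vulnerabilities",
    "social_scoring",
    "predictive_criminal_risk_assessment",
    "facial_recognition_database_scraping",
    "emotion_recognition_workplace_or_education",
    "biometric_categorisation_sensitive_attributes",
    "real_time_remote_biometric_identification",
}

@dataclass
class AISystem:
    name: str
    categories: set[str] = field(default_factory=set)  # flagged in the audit
    output_used_in_eu: bool = False                    # extraterritorial trigger

def screen(system: AISystem) -> list[str]:
    """Return the prohibited categories a system is flagged under,
    but only if the AI Act's territorial scope applies at all."""
    if not system.output_used_in_eu:
        return []
    return sorted(system.categories & PROHIBITED_CATEGORIES)

# Example: a non-EU tool whose output is consumed by EU clients.
tool = AISystem(
    name="hiring-sentiment-monitor",
    categories={"emotion_recognition_workplace_or_education"},
    output_used_in_eu=True,
)
flags = screen(tool)
print(flags)  # -> ['emotion_recognition_workplace_or_education']
```

A non-empty result would mean escalating the system to legal review, since exceptions (such as medical or safety purposes for emotion recognition) require case-by-case assessment that no checklist can automate.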

Final Thoughts

The EU AI Act’s prohibition framework, effective from February 2, 2025, represents the world’s first comprehensive ban on AI practices deemed to pose unacceptable risks. With maximum penalties reaching €35 million or 7% of worldwide turnover, organisations globally must ensure compliance when serving EU markets.

The eight categories of prohibited AI systems address fundamental concerns about AI’s potential to manipulate, exploit, and discriminate. While exceptions exist for legitimate law enforcement, medical, and safety purposes, the default position is prohibition of these high-risk practices.

As European Commission guidelines continue to develop and enforcement mechanisms strengthen, organisations should proactively assess their systems, implement compliance programs, and prepare for the evolving regulatory landscape.

Key Actions for Compliance:

  1. Conduct immediate audits of AI systems against prohibited categories
  2. Implement AI literacy training for relevant personnel
  3. Establish ongoing monitoring for new AI deployments
  4. Prepare compliance documentation for enforcement authorities
  5. Monitor European Commission guidelines and member state implementations

The EU AI Act’s prohibited systems framework sets a global precedent for AI governance, emphasising the protection of fundamental rights while enabling beneficial AI innovation. Success requires understanding not just what is prohibited, but why these practices pose unacceptable risks to European values and human dignity.


This article was prepared based on the EU AI Act (Regulation (EU) 2024/1689) and related European Commission guidance. Organisations should consult with legal experts for specific compliance advice.


Last updated: November 2025

Yuliia Habriiel

Yuliia Habriiel is the co-Founder and CEO of eyreACT, combining a legal background with experience in SaaS, legaltech, and fintech across Europe and North America. Originally from Ukraine, she drives eyreACT’s mission to make AI compliance practical, enabling teams to innovate responsibly.
