LAWCIS Legal Intelligence.
AI Compliance. Real Impact.
Lawcis Research and Consultancy Limited
EU AI Act Commentary

Prohibited AI Practices Under the EU AI Act: What Businesses Need to Know

Article 5 of the EU AI Act prohibits certain AI practices that present unacceptable risks for the fundamental rights of individuals.

This article forms part of the Lawcis commentary on the EU AI Act. For the full annotated reference work, see EU AI Act Explained.

The EU Artificial Intelligence Act (EU AI Act) is the most comprehensive attempt yet to regulate artificial intelligence. As AI systems become increasingly embedded in business operations, government services, and everyday life, the European Union has introduced a legal framework designed to protect fundamental rights while still encouraging innovation.

One of the most important features of the EU AI Act is its risk-based regulatory structure. Instead of regulating all AI technologies equally, the legislation classifies AI systems according to the level of risk they pose.

At the highest level are prohibited AI practices, which the law considers to present unacceptable risks to individuals and society. These practices are banned entirely under Article 5 of the EU AI Act.

For businesses developing or deploying AI in the European market, understanding these prohibitions is essential. Violating them can lead to fines of up to €35 million or 7% of global annual turnover, whichever is higher.
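As a rough illustration (a sketch, not legal advice), the penalty ceiling for prohibited-practice violations works as a higher-of calculation between the fixed amount and the turnover-based amount:

```python
def max_fine_eur(annual_turnover_eur: float) -> float:
    # Cap for prohibited-practice violations: EUR 35 million or
    # 7% of total worldwide annual turnover for the preceding
    # financial year, whichever is higher (for companies).
    return max(35_000_000, annual_turnover_eur * 7 / 100)
```

For a company with €1 billion in global annual turnover, the 7% figure (€70 million) exceeds the €35 million floor, so the higher amount applies.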

This article explains the main prohibited AI practices and why they matter for companies operating in the EU.

The EU AI Act’s Risk-Based Approach

The EU AI Act divides AI systems into four categories:

  1. Unacceptable Risk (prohibited AI practices)
  2. High Risk (strictly regulated)
  3. Limited Risk (transparency obligations)
  4. Minimal Risk (largely unrestricted)
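Purely for illustration, the four tiers can be sketched as a simple mapping. The example systems named below are assumptions chosen for readability, not legal classifications; placing a real system in a tier requires analysis under the Act itself:

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited under Article 5"
    HIGH = "strictly regulated"
    LIMITED = "transparency obligations"
    MINIMAL = "largely unrestricted"

# Hypothetical example systems mapped to tiers for illustration only;
# actual classification depends on legal analysis of the Act.
EXAMPLES = {
    "social scoring system": RiskTier.UNACCEPTABLE,
    "CV screening tool": RiskTier.HIGH,
    "customer service chatbot": RiskTier.LIMITED,
    "email spam filter": RiskTier.MINIMAL,
}
```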

Only a small number of AI practices fall into the unacceptable risk category, but they are among the most controversial and important elements of the legislation.

Article 5 identifies several AI uses that are considered fundamentally incompatible with EU values such as human dignity, privacy, fairness, and democracy.

1. AI Systems that Manipulate Human Behaviour

The AI Act prohibits AI systems that manipulate individuals in ways that they cannot reasonably detect or resist.

This includes systems using subliminal techniques or deceptive methods to influence behaviour in ways that cause harm.

For example, an AI application that uses subtly manipulative interface design or psychological triggers to push vulnerable individuals into harmful financial decisions could fall within this category.

The key legal issue is whether the AI system materially distorts a person’s behaviour, potentially leading to significant harm.

This provision reflects growing concerns about algorithmic manipulation, particularly in advertising, online platforms, and political communication.

2. AI that Exploits Vulnerable Individuals

Another prohibited category involves AI systems that exploit people who are particularly vulnerable.

The EU AI Act specifically mentions vulnerability due to:

  • Age (such as children or the elderly)
  • Disability
  • Economic circumstances
  • Social situation

For instance, an AI system designed to pressure elderly users into purchasing expensive services or products could be considered illegal under the Act.

This prohibition recognises that AI technologies can exploit behavioural patterns and psychological weaknesses at scale.

3. AI-Based Social Scoring

One of the most widely discussed elements of the AI Act is the prohibition of AI-driven social scoring systems.

Social scoring involves analysing a person’s behaviour or characteristics and assigning a score that determines how they are treated by organisations or public authorities.

Examples could include:

  • Scoring individuals based on personal behaviour
  • Ranking citizens according to perceived trustworthiness
  • Penalising individuals in unrelated contexts based on past conduct

The EU considers such systems to be incompatible with fundamental rights because they can lead to unfair discrimination and social exclusion.

4. Predictive Policing Based Solely on Profiling

The AI Act also prohibits AI systems that attempt to predict criminal behaviour based solely on profiling individuals.

This includes systems that rely primarily on factors such as:

  • Personal characteristics
  • Behaviour patterns
  • Demographic data

The EU considers this type of predictive policing problematic because it risks reinforcing bias and undermining the presumption of innocence.

However, AI tools that assist law enforcement using objective evidence linked to criminal activity may still be permitted under certain conditions.

5. Mass Collection of Facial Recognition Data

Another important prohibition concerns the creation of large facial recognition databases through untargeted scraping of images.

Some AI developers collect facial images from social media platforms, websites, or surveillance cameras to train recognition systems.

Under the EU AI Act, collecting biometric data in this indiscriminate way is banned because it violates privacy rights and may lead to mass surveillance.

This rule will significantly affect companies developing facial recognition technologies.

6. Emotion Recognition in Workplaces and Schools

Emotion recognition systems attempt to identify emotional states such as stress, happiness, or engagement by analysing facial expressions, voice patterns, or biometric signals.

The EU AI Act prohibits the use of these systems in workplaces and educational institutions.

The reason is that such technologies can intrude deeply into personal privacy and may be used to monitor employees or students in ways that undermine autonomy and dignity.

Exceptions may exist for medical or safety purposes, but these will be strictly limited.

7. Biometric Categorisation Using Sensitive Traits

The AI Act also prohibits AI systems that classify individuals using biometric data linked to sensitive characteristics.

These include attempts to infer traits such as:

  • Race or ethnicity
  • Religious beliefs
  • Political opinions
  • Sexual orientation

Such systems could lead to discrimination or profiling at scale, which is why the EU considers them unacceptable.

8. Real-Time Biometric Surveillance in Public Spaces

The use of real-time facial recognition in public spaces by law enforcement is another controversial area.

In general, the EU AI Act prohibits such systems because they enable large-scale monitoring of the public.

However, limited exceptions exist for situations such as:

  • Searching for missing persons
  • Preventing terrorist attacks
  • Investigating serious crimes

Even in these cases, strict safeguards and authorisation procedures are required.

Why Businesses Must Pay Attention

The EU AI Act applies not only to companies based in Europe but also to organisations outside the EU that provide AI systems in the European market.

This means that developers, software companies, technology providers, and businesses deploying AI tools must carefully evaluate whether their systems fall within the prohibited categories.

Failure to comply could result in significant financial penalties and reputational damage.

Organisations should therefore begin implementing AI governance and compliance strategies, including:

  • AI risk assessments
  • Internal compliance policies
  • Documentation of AI systems
  • Human oversight mechanisms

Understanding which AI practices are prohibited is the first step toward building responsible and legally compliant AI systems.

Learn More About the EU AI Act

The EU AI Act is a complex and rapidly evolving area of law. While this article outlines the key prohibited practices, businesses and legal professionals often need a deeper understanding of how the regulation works in practice.

My book EU AI Act Explained provides a clear and practical guide to the legislation, including:

  • Detailed explanations of the AI Act’s provisions
  • Analysis of AI systems
  • Compliance requirements for businesses developing and using AI systems
  • Real-world examples of how the law will apply

If you want to fully understand the EU AI Act and prepare your organisation for the new regulatory landscape, you can learn more about the book below.

About the author

Olga Markova is a solicitor (England & Wales) and the author of EU AI Act Explained, a practitioner-focused commentary on the EU Artificial Intelligence Act and its implementing framework. Earlier in her career she worked in private practice with leading international law firms and in the telecommunications sector, focusing on technology and regulatory matters.

Connect with the author: LinkedIn | X

EU AI Act Explained

A 700-page annotated legal commentary covering the EU AI Act and its implementing framework as of 1 March 2026.

View book details
EU AI Act Explained book cover

Prefer Structured Learning?

If you would like a more guided route through the subject, the EU AI Act Essentials for Businesses Course complements the book and articles with a practical, business-focused introduction to how the EU AI Act may apply across business activities.