Quick Summary
The EU AI Act treats deployers as users with decision-making authority over AI systems. Classification depends on whether the software is truly an AI system, not merely automation. Responsibility arises from control over deployment and operational use. Personal, non-professional use falls outside regulatory scope.
Legal Position
The EU AI Act defines “deployer” as a natural or legal person, public authority, agency or other body using an AI system under its authority, except where the AI system is used in the course of a personal non-professional activity.
So a deployer is a user. But to understand which users are deployers, it is necessary to work through three key elements of the definition.
First, the deployer must be using an AI system. The EU AI Act defines an AI system as a machine-based system designed to operate with varying levels of autonomy, which may exhibit adaptiveness after deployment and, for explicit or implicit objectives, infers from the input it receives how to generate outputs such as predictions, content, recommendations or decisions that can influence physical or virtual environments.
It follows that not all software is an AI system, and so not all users of software are deployers under the EU AI Act. Some machine-based systems leave no doubt that they are AI systems, for example ChatGPT and Microsoft Copilot. Other output-generating software is less clear. For example, is Excel an AI system?
There is no exhaustive list of AI systems or software that is not an AI system. However, the European Commission’s Guidelines on the definition of an AI system offer a methodology to evaluate software.
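For organisations that keep this screening in a simple internal register, the sketch below is one way of encoding the definitional elements quoted above as a checklist. It is illustrative only: the field names, and the treatment of adaptiveness as a non-essential element (the definition says the system "may exhibit" it), are our simplification, not the Commission's methodology.

```python
from dataclasses import dataclass

@dataclass
class AISystemScreening:
    """Screening of one tool against the elements of the EU AI Act
    definition of an AI system. Field names are an illustrative
    simplification, not an official checklist."""
    name: str
    machine_based: bool               # a machine-based system
    operates_with_autonomy: bool      # designed to operate with some level of autonomy
    may_adapt_after_deployment: bool  # "may exhibit adaptiveness" - optional element
    infers_outputs_from_input: bool   # infers how to generate outputs, rather than
                                      # applying only fixed, human-defined rules
    generates_relevant_outputs: bool  # predictions, content, recommendations or decisions
    can_influence_environments: bool  # outputs can influence physical or virtual environments

    def likely_ai_system(self) -> bool:
        # Adaptiveness is phrased as "may exhibit", so it is not treated
        # as a necessary condition here; the remaining elements are.
        return all([
            self.machine_based,
            self.operates_with_autonomy,
            self.infers_outputs_from_input,
            self.generates_relevant_outputs,
            self.can_influence_environments,
        ])

# A fixed-formula spreadsheet typically fails the inference element,
# while a general-purpose chatbot typically satisfies all of them.
spreadsheet = AISystemScreening(
    name="Fixed-formula spreadsheet",
    machine_based=True,
    operates_with_autonomy=False,
    may_adapt_after_deployment=False,
    infers_outputs_from_input=False,
    generates_relevant_outputs=True,
    can_influence_environments=True,
)
print(spreadsheet.name, "->", spreadsheet.likely_ai_system())  # -> False
```

A screening of this kind is a first filter only; borderline cases should still be assessed against the Commission's Guidelines.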
Second, the AI system must be used under the authority of that user. Authority over an AI system means responsibility for the decision to deploy the system and for the manner of its actual use.
Third, personal non-professional use is excluded. A user of an AI system is not a deployer if the user is an individual and uses the AI system only for personal non-professional activity.
The EU AI Act and its implementing regulations to date do not define personal non-professional activity. It should therefore be understood literally, as the opposite of business or professional use.
Practical Steps
Map your use of software and AI tools. Create an internal inventory of all systems used across the business. Do not assume all software is an AI system — identify which tools may fall within the EU AI Act definition.
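By way of illustration only, and assuming the inventory is kept as simple structured records rather than a spreadsheet, an entry might look like the following. The field names are our assumptions, not a format prescribed by the Act.

```python
# Minimal internal inventory as plain records; field names are
# illustrative, not prescribed by the EU AI Act.
inventory = [
    {
        "system": "Microsoft Copilot",
        "vendor": "Microsoft",
        "business_function": "drafting and summarisation",
        "internal_owner": "IT",
        "possibly_ai_system": True,   # flag for assessment, not a conclusion
    },
    {
        "system": "Payroll spreadsheet",
        "vendor": "internal",
        "business_function": "payroll calculation",
        "internal_owner": "Finance",
        "possibly_ai_system": False,  # fixed formulas; revisit if AI features are added
    },
]

# Tools flagged for assessment against the Article 3(1) definition:
to_assess = [row["system"] for row in inventory if row["possibly_ai_system"]]
print(to_assess)  # ['Microsoft Copilot']
```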
Assess whether each system qualifies as an AI system. Apply the EU AI Act definition and, where necessary, the European Commission Guidelines methodology. Pay particular attention to systems that generate predictions, recommendations or decisions.
Determine who exercises authority over each system. Identify the person or entity responsible for deciding to deploy the system and how it is used in practice. This will typically determine who is the deployer.
Allocate responsibility internally. Ensure there is clear ownership of each AI system within the organisation, for example business unit, compliance or IT. Avoid situations where responsibility is unclear or fragmented.
Exclude purely personal use scenarios. Check whether any use of AI by individuals is genuinely personal and non-professional. If systems are used in a business context, even informally, they may still fall within scope.
Review third-party and SaaS arrangements. Where AI systems are provided by vendors, such as copilots or analytics tools, assess whether your organisation is nonetheless the deployer because the system is used under its authority.
Document your classification decisions. Keep a written record explaining why a system is, or is not, considered an AI system and why your organisation is, or is not, the deployer. This will be important for audit and regulatory scrutiny.
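Assuming again a simple structured register, a classification record could capture the two questions discussed above: whether the system is an AI system, and whether the organisation is its deployer. The fields below are illustrative assumptions, not a required format.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ClassificationDecision:
    """Record of why a system is (or is not) an AI system and why the
    organisation is (or is not) its deployer. Illustrative only."""
    system_name: str
    is_ai_system: bool
    ai_system_rationale: str        # reasoning against the Article 3(1) elements
    organisation_is_deployer: bool
    deployer_rationale: str         # who decided to deploy it and controls its actual use
    decided_on: date
    reviewer: str

record = ClassificationDecision(
    system_name="Vendor analytics copilot",
    is_ai_system=True,
    ai_system_rationale="Infers recommendations from input data with some autonomy.",
    organisation_is_deployer=True,
    deployer_rationale="We decided to deploy the system and control how it is used.",
    decided_on=date(2025, 1, 15),
    reviewer="Compliance",
)
```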
Train staff on deployer responsibilities. Ensure relevant teams understand that using AI in a business context may trigger obligations under the EU AI Act.
Integrate into compliance workflows. Link deployer identification into broader AI governance processes, including risk classification, transparency obligations and internal approvals.
Reassess regularly. As software evolves, for example by adding AI features, re-evaluate whether tools that were previously out of scope now qualify as AI systems.