
When Do You Need to Disclose AI Use? A Busy Reader’s Guide to the EU AI Act’s Transparency Requirements

If your business uses chatbots, AI-generated images, synthetic voice, automated assistants, or other forms of generative AI, one practical question is becoming hard to avoid: when do you need to tell people that AI is involved?

This article forms part of the Lawcis commentary on the EU AI Act. For the full annotated reference work, see EU AI Act Explained.

That is the core issue behind the transparency requirements in Article 50 of the EU AI Act. The law is not only about banned AI practices and high-risk systems. It also contains rules designed to make sure people know when they are interacting with AI or when they are looking at AI-generated or AI-manipulated content.

For busy readers, the big point is: if AI could mislead someone about who or what they are dealing with, transparency starts to matter.

What are the EU AI Act’s transparency requirements?

Article 50 covers transparency obligations for certain AI systems where people may not realise that they are interacting with AI, or may be exposed to content that looks real but has actually been generated or manipulated by AI. Official EU guidance explains that Article 50 covers four main categories of AI systems and uses.

  • Providers of AI systems that interact directly with people must inform them that they are interacting with an AI system, unless this is obvious from the context.
  • Providers of AI systems that generate synthetic audio, image, video, or text content must ensure that outputs are marked in a machine-readable format and can be detected as artificially generated or manipulated.
  • Deployers of emotion recognition or biometric categorisation systems must inform the people exposed to them that the system is in operation.
  • Deployers of AI systems that generate or manipulate deepfake content, or certain AI-generated or manipulated text published to inform the public on matters of public interest, must disclose the artificial origin of the content, subject to defined exceptions.

These obligations sit alongside other parts of the AI Act and are designed to reduce deception and manipulation while supporting trust in the information environment.

When do you need to disclose AI use?

The clearest working answer is this: you need to disclose AI use when a person might otherwise not realise they are dealing with AI, or might be misled by AI-generated or manipulated content.

That principle shows up in a few common business scenarios.

1. When a person is interacting with a chatbot or AI assistant

If a natural person is interacting with an AI system, that person generally needs to be informed unless the AI use is already obvious from the circumstances. This can matter for website chat tools, AI customer service assistants, intake bots, onboarding tools, HR support tools, and voice agents.

The practical lesson is straightforward. If a user is speaking to an automated system, make that clear up front. A notice buried in the terms and conditions is unlikely to do the job; clear notice at the start of the interaction is far more consistent with the purpose of the rule.
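To make that concrete, here is a minimal sketch of up-front disclosure in a chat flow, written in Python. Everything in it (the ChatSession class, the generate_reply placeholder, the wording of the notice) is an illustrative assumption, not a form of words drawn from the Act.

    # Minimal sketch: show an AI disclosure before the first automated reply.
    # All names here are hypothetical and for illustration only.

    AI_NOTICE = (
        "You are chatting with an automated AI assistant, not a human. "
        "You can ask to be transferred to a person at any time."
    )

    def generate_reply(user_message: str) -> str:
        # Placeholder for the real model or scripted-bot call.
        return f"(automated answer to: {user_message})"

    class ChatSession:
        def __init__(self) -> None:
            self.notice_shown = False

        def reply(self, user_message: str) -> str:
            answer = generate_reply(user_message)
            if not self.notice_shown:
                # Disclose at the start of the interaction, not in the T&Cs.
                self.notice_shown = True
                return AI_NOTICE + "\n\n" + answer
            return answer

    session = ChatSession()
    print(session.reply("What are your opening hours?"))  # includes the notice
    print(session.reply("Thanks!"))                       # notice not repeated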

2. When you publish AI-generated or AI-manipulated content

The Commission’s transparency guidance makes clear that Article 50 also reaches AI-generated and AI-manipulated content, which is where detection, marking, and labelling come into play. This is relevant for marketing teams, training providers, publishers, agencies, platforms, and businesses using synthetic media in outward-facing content.

Not every piece of content touched by AI will automatically require the same treatment. But the closer content gets to appearing authentic or potentially misleading, the stronger the case for disclosure becomes.
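As a purely technical illustration of what machine-readable marking can look like, the sketch below embeds a text flag in a PNG image's metadata using the Pillow library. The key names and values are assumptions chosen for this example; what counts as adequate marking in practice will be shaped by the Commission's forthcoming guidelines and Code of Practice.

    # Illustrative only: embed a machine-readable "AI generated" flag in a
    # PNG's text metadata with Pillow. The key/value scheme is hypothetical,
    # not a scheme endorsed by the AI Act or the Commission.

    from PIL import Image
    from PIL.PngImagePlugin import PngInfo

    def mark_png_as_ai_generated(src_path: str, dst_path: str) -> None:
        metadata = PngInfo()
        metadata.add_text("ai_generated", "true")        # hypothetical key
        metadata.add_text("generator", "example-model")  # hypothetical key
        with Image.open(src_path) as image:
            image.save(dst_path, pnginfo=metadata)

Metadata of this kind can be stripped by simple re-encoding, which is one reason robust marking and detection is the subject of dedicated standards work rather than a one-line fix.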

3. When content could be mistaken for something real

Deepfakes are the obvious example, but the compliance question is broader than viral fake videos. The key issue is whether a reasonable person could believe the content is real, authentic, or human-created when it is not.

That question can arise in promotional videos, product demos, educational materials, public affairs content, recruitment campaigns, and corporate communications. This is not just a problem for bad actors. It is also a governance issue for responsible businesses using ordinary generative AI tools.

Why this matters for ordinary businesses

Many organisations still hear “EU AI Act” and think mainly about high-risk systems, big AI models, or remote biometric surveillance. But Article 50 is different. It is likely to matter because it captures ordinary uses of AI that many businesses are already experimenting with.

A law firm using an AI intake assistant, a consultancy publishing AI-generated explainer graphics, a training business using an AI narrator, or a marketing team producing synthetic video content may all need to think about transparency. The compliance burden is not confined to technology companies. It also reaches deployers choosing how AI is used in practice.

A practical test: ask these four questions

For a busy organisation, it helps to translate the law into a quick internal test.

Are people interacting with AI without knowing it?

If yes, disclosure is likely to matter.

Could content be mistaken for authentic human or real-world material?

If yes, labelling or other transparency steps may be needed.

Is the content public-facing or professionally deployed?

That increases the importance of getting the transparency analysis right.

Would you feel comfortable if a regulator, journalist, or client asked why the AI involvement was not made clear?

If the answer is no, your transparency design probably needs work.
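For teams that want to build this test into an intake form or review workflow, the Python sketch below is one way to express it. The question names and the "any flag deserves a closer look" logic are assumptions of this example; the aim is to force the questions to be asked, not to automate legal judgment.

    # Illustrative triage helper: turn the four questions above into flags
    # that send a use case to human transparency review. Not legal advice.

    def transparency_review_flags(
        people_interact_without_knowing: bool,
        could_pass_as_real_or_human_made: bool,
        public_facing_or_professional: bool,
        comfortable_explaining_non_disclosure: bool,
    ) -> list[str]:
        flags = []
        if people_interact_without_knowing:
            flags.append("undisclosed interaction with AI")
        if could_pass_as_real_or_human_made:
            flags.append("content could be mistaken for real")
        if public_facing_or_professional:
            flags.append("public-facing or professional deployment")
        if not comfortable_explaining_non_disclosure:
            flags.append("non-disclosure would be hard to justify")
        return flags

    # Example: an AI intake assistant on a public website.
    print(transparency_review_flags(True, False, True, False))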

What should businesses do now?

A sensible starting point is not to wait for a complaint or a problem. Instead, build a basic transparency process now.

Map your AI use cases

Identify where AI is used across the business, especially in customer-facing, public-facing, and communications contexts.

Separate low-risk assistance from outward-facing AI

Spellcheck or internal drafting help is one thing. An AI system interacting with users or producing realistic outward-facing content is another.

Create approval rules

Teams should know when AI-generated material needs sign-off, what labels may be needed, and who makes the decision.

Review vendor promises

If you rely on third-party tools, understand what they do for marking, detectability, and disclosure support.

Train staff

Transparency cannot be handled by legal alone. Marketing, communications, HR, sales, and operations all need enough AI literacy to spot when disclosure may be required.

A word on timing

The current official EU guidance says the transparency obligations in Article 50 apply from 2 August 2026, and the Commission is developing guidelines and a Code of Practice on marking and labelling AI-generated content to support compliance. Even so, the better business message is not to fixate on a single date, but to start understanding the rules now.

Why trust matters as much as compliance

Even where the legal boundaries are still being clarified, the direction of travel is obvious. European policymakers want people to know when they are dealing with AI and when content has been artificially generated or manipulated. That is not just a compliance issue. It is also a trust issue.

If customers, readers, clients, or users feel they were misled about AI involvement, the reputational cost may arrive long before any enforcement action. For many organisations, a clear disclosure approach will therefore be good governance as well as good compliance.

Quick takeaway

So, when do you need to disclose AI use?

You should be thinking about disclosure whenever AI interacts directly with people, whenever AI-generated or AI-manipulated content could be mistaken for something real, and whenever the absence of transparency could mislead an audience. That is the heart of the EU AI Act’s transparency requirements.

For busy businesses, the best next step is to identify where AI appears in your workflows, decide where transparency matters most, and build a simple process for handling disclosure, labelling, and review.

About the author

Olga Markova is a solicitor (England & Wales) and the author of EU AI Act Explained, a practitioner-focused commentary on the EU Artificial Intelligence Act and its implementing framework. Earlier in her career she worked in private practice with leading international law firms and in the telecommunications sector, focusing on technology and regulatory matters.

Connect with the author: LinkedIn · X

EU AI Act Explained

A 700-page annotated legal commentary covering the EU AI Act and its implementing framework as of 1 March 2026.

View book details