
The EU AI Act and Localisation: The New Rules and Why They Matter for Buyers

November 7, 2025


In the past few years, advanced AI tools have rapidly expanded their role in localisation, and the EU AI Act is now reshaping how buyers use them.


Once limited to statistical and neural machine translation, AI now includes large language models, automated transcription, synthetic voiceover, and other systems that touch almost every type of content. With the EU's AI Act now in force, these platforms and tools are subject to the first binding rules for how they are built, deployed, and reviewed. The law's phased rollout will bring deadlines that affect which tools you use, how workflows are structured, and how multilingual content is delivered.


The most relevant provisions for language services buyers focus on how AI is applied in specific services and the safeguards that surround its output. Some measures address the technology itself, while others require changes to the review and quality assurance processes that happen before content reaches your customers. To better understand what the EU AI Act means for localisation buyers, it’s important to know which rules apply, when they take effect, and how to plan projects as enforcement begins.

A Closer Look at the EU AI Act

The EU AI Act (Regulation (EU) 2024/1689) is the first legal framework for artificial intelligence, and it has direct implications for localisation buyers. It applies to any company that develops, sells, or uses AI systems in the European Union, which includes AI tools used in localisation. It also applies beyond the EU’s borders. Providers based outside the EU must comply if their AI systems or outputs are offered within the EU market or used by people in the EU.


The regulation took effect in August 2024 with a straightforward purpose: make sure AI is safe to use, clear in how it works, and respectful of people’s rights. The Act sorts AI into risk levels, from systems that are banned outright to those considered minimal risk. Each level comes with its own set of rules, and enforcement will be handled by national authorities in every EU member state.

Planning for EU AI Act Deadlines

The AI Act is being enforced in stages, and the first obligations for localisation-related AI are already in effect. Others will follow over the next two years.


2 February 2025: The ban on “unacceptable-risk” AI uses took effect. AI literacy requirements also began, meaning providers and deployers must ensure their staff are trained to use AI responsibly.
2 August 2025: Governance rules for general-purpose AI (GPAI) models like ChatGPT came into force. Models classified as systemic risk face extra obligations, including stricter transparency and oversight. In response, vendors have already begun changing tools and workflows to comply.
2 August 2026: High-risk AI systems must meet stricter oversight, documentation, and transparency obligations. Any AI used for regulated or sensitive content will require more human review and record keeping.
2 August 2027: The transition period for general-purpose AI ends. By this point, all GPAI models already on the market must comply with the Act, building on the obligations that began in 2025.


Each enforcement date is a trigger point for localisation companies to review and adjust tools, contracts, or processes. Tracking them is central to AI compliance for language service buyers, ensuring agreements and workflows are aligned before deadlines hit. That can mean revising contract terms ahead of one deadline, adding audit log requirements at another, or piloting new review steps before they become mandatory.

Understanding Risk Categories Under the EU AI Act

The AI Act sorts systems into four categories based on inherent risk, each with its own obligations. Understanding the categories is important because it drives how much oversight, documentation, and disclosure a vendor must provide before delivery.


Unacceptable risk: This category includes AI uses that are banned outright, such as systems that manipulate behaviour in harmful ways or pose a safety threat. These are unlikely to appear in localisation workflows, but are prohibited regardless of industry.


High risk: This category applies when AI is used in regulated sectors or for content with legal, medical, or safety consequences. Examples include translating clinical trial documentation or court submissions. These systems require qualified human review, detailed records, and compliance checks before release.


Limited risk: This lower-risk category requires disclosure whenever AI-generated text, audio, or video is used. This can include synthetic voiceovers for e-learning or automatically generated captions, which must be labelled as AI-produced.


Minimal risk: This lowest category encompasses most other AI uses, such as drafting internal training content, which still must comply with general laws like GDPR.


The Rules That Matter Most for Localisation Buyers

Once you know the risk level of a system under the EU AI Act, the next step for localisation buyers is to understand the obligations that come with it.


Oversight and accountability sit at the heart of EU AI Act localisation compliance, especially when AI is used for high-risk workflows. These principles define what responsible AI in localisation looks like in practice: qualified people reviewing AI output before it reaches your audience, clear review criteria set in advance, and detailed records of what was checked and by whom. For regulated or sensitive projects, these logs must be available on request, and they may need to be provided in an audit as evidence of compliance.


AI transparency requirements apply to more than just banned or high-risk systems. If captions, transcripts, or voiceovers are AI-generated, the law requires that to be made clear to the end user. This also extends to mixed workflows: captions created by AI but edited by a human still need to be clearly labelled.


These requirements also tie directly into accessibility. Clear labelling and structured formats determine whether content is usable for all audiences and meets procurement standards.


Data governance and escalation obligations cover how training data, client content, and personal information are handled. Buyers should know whether their vendor’s tools send content to external services, how long that content is retained, and whether training on customer data is disabled. For projects covering sensitive and regulated domains, a clear escalation path for switching to human-only workflows or bringing in specialist reviewers should be documented.


Data protection under GDPR remains a parallel requirement. The AI Act does not replace existing privacy law, but it reinforces it. That means localisation buyers should confirm that vendors disable training on customer data, limit retention, and keep content within approved environments. Privacy and AI compliance are now two sides of the same safeguard for language service buyers.

Turning the EU AI Act into Action

The first enforcement stages already show how the EU AI Act affects localisation, with vendors swapping AI tools, revising disclosure formats, and introducing more structured review logs. Localisation buyers who stay engaged with these changes can influence how EU AI Act compliance measures are applied, rather than adjusting after decisions are made.


Embedding these measures into standard practice is part of responsible AI in localisation, such as requiring clear labels for AI output, records of what was reviewed, and defined steps for escalating sensitive projects. Acting early keeps quality under control and budgets aligned with value. Early adopters benefit too, since vendors who act ahead of the deadlines offer more predictable timelines and smoother projects.


Our next article will turn from rules to practice, showing how localisation buyers can carry these requirements into supplier contracts and vendor selection criteria, choosing partners who can prove compliance while building data protection and oversight in from the start.

If you have questions about the EU AI Act or want to learn how to combine AI efficiency with expert oversight for safe, scalable multilingual content, book a consultation with us.


Copyright © The Translation People Limited 2026. All Rights Reserved.
The Translation People Limited. Registered in England and Wales No: 06329037
Registered address: America House, Rumford Court, Rumford Place, Liverpool L3 9DD.
‘The Translation People’ & ‘Intelligent localisation. Global engagement.’ are registered trademarks of The Translation People Limited.
