The EU AI Act: A new era for artificial intelligence regulation in Europe begins

Estimated reading time: 5 minutes

AI is here, and it’s already changing societies. Can regulators help the sector to flourish – and also keep people safe? The EU thinks so. Let’s dive into the details of its AI Act…

February 2025 was a huge month for Europe’s artificial intelligence (AI) sector. World leaders gathered at the AI Action Summit in Paris, and the European Commission announced a plan to make Europe a leading force in AI.

But there’s more to AI than commerce. There is also safety. To protect its citizens, the European Union drafted the AI Act, which creates a uniform legal regime for all EU member states. It is the world’s first major regulatory framework for AI.

The EU believes the act can set the standard for AI regulation, and minimise the danger to citizens. In its launch announcement, it said: “By developing a strong regulatory framework based on human rights and fundamental values, the EU can develop an AI ecosystem that benefits everyone.”

How the EU defines AI

In order to create guidelines around AI, it’s important to define exactly what AI is. This has proved difficult – the EU has revised its definition many times. However, in February 2025, it published the following new wording in its official guidelines.

“‘AI system’ means a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.”

A short history of the EU Artificial Intelligence Act

The EU Artificial Intelligence Act was first proposed in 2021 and entered into force in August 2024. Here’s a timeline of key dates.

April 2021
The EC publishes a proposal to regulate AI in the EU.

December 2023
A provisional agreement is reached on the AI Act.

August 2024
The AI Act enters into force, following its publication in the Official Journal of the European Union.

February 2025
Prohibitions on unacceptable-risk AI systems and requirements on AI literacy start to apply.

August 2026
The remainder of the AI Act will start to apply.

The AI Act’s risk-based approach

All regulation must strike a balance between keeping people safe and promoting innovation. This is why the Act takes a risk-based approach. In short, the higher the risk, the stricter the rules. Regulators expect most AI use cases to be low risk, and therefore exempt from scrutiny. But at the other extreme, the Act bans programs that infringe fundamental rights and democratic values.

The Act categorises AI systems into four levels as follows:

•  Low and minimal risk

These systems pose minimal risks to fundamental rights or safety. In most cases, they will require only self-declaration. They include:

•  Spam filters
•  Product recommendation systems
•  Chatbots
•  Simple image or video filters
•  Simple games
•  Spam detection algorithms
•  Educational tools with limited personalisation
•  Image enhancement tools

•  High risk

These systems require stringent compliance measures. The Act describes them as AI used in:

•  Critical infrastructures that could put life at risk
•  Educational or vocational training that could determine access to education
•  Product safety components
•  Management of workers
•  Essential private and public services (e.g. credit scoring)
•  Law enforcement
•  Migration, asylum and border control management
•  Justice and democratic processes

•  Unacceptable risk

These AI systems are considered a threat to fundamental rights, safety, or livelihoods. They are banned within the EU – though with caveats. Unacceptable-risk use cases include:

•  Systems that assign social credit scores based on behaviour or political affiliations
•  Real-time remote biometric identification (RBI) for law enforcement in public spaces
•  Deploying subliminal or deceptive techniques to distort behaviour and impair decision-making
•  Exploiting vulnerable communities
•  Using biometrics to infer sensitive attributes (race, political opinions etc)
•  Criminal profiling based on personality traits
•  Inferring emotions in workplaces or educational institutions
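
Taken together, these categories amount to a tiered lookup. The sketch below is a hypothetical Python illustration of how an organisation might triage its own use cases against the tiers described above; the tier names and keyword buckets follow this article’s groupings and examples, not the Act’s legal text.

```python
from enum import Enum

class RiskTier(Enum):
    """Risk tiers as grouped in this article."""
    LOW_OR_MINIMAL = "low and minimal risk"   # self-declaration in most cases
    HIGH = "high risk"                        # stringent compliance measures
    UNACCEPTABLE = "unacceptable risk"        # banned within the EU

# Illustrative keyword buckets drawn from the examples listed above.
UNACCEPTABLE_USES = {"social scoring", "real-time RBI", "subliminal manipulation"}
HIGH_RISK_USES = {"credit scoring", "worker management", "border control"}

def classify(use_case: str) -> RiskTier:
    """Triage a use-case label; anything unlisted defaults to low/minimal."""
    if use_case in UNACCEPTABLE_USES:
        return RiskTier.UNACCEPTABLE
    if use_case in HIGH_RISK_USES:
        return RiskTier.HIGH
    return RiskTier.LOW_OR_MINIMAL

print(classify("credit scoring").value)   # -> high risk
print(classify("spam filter").value)      # -> low and minimal risk
```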

A special rule for AI ‘models’

AI is not just about use cases. It’s also about the underlying models that power them – which the Act calls general-purpose AI (GPAI) models. EU regulators propose a risk-based approach here too.

For commercial GPAIs, this means transparency. But there are even stricter provisions for GPAIs with “systemic risk”. In other words, GPAI models that could threaten human life or replicate themselves away from human oversight. 

At present, the Act considers GPAI models to pose a systemic risk when the cumulative amount of compute used for training exceeds 10²⁵ FLOPs (floating-point operations).
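
To make that threshold concrete, here is a hypothetical back-of-the-envelope check in Python. It uses the common “~6 × parameters × tokens” approximation for dense transformer training compute – a community rule of thumb, not something specified in the Act.

```python
# The Act's systemic-risk threshold: 10^25 cumulative training FLOPs.
SYSTEMIC_RISK_THRESHOLD = 1e25

def estimated_training_flops(parameters: float, tokens: float) -> float:
    """Rule-of-thumb estimate: ~6 FLOPs per parameter per training token."""
    return 6.0 * parameters * tokens

# Hypothetical example: a 70-billion-parameter model trained on 15 trillion tokens.
compute = estimated_training_flops(70e9, 15e12)
print(f"{compute:.2e} FLOPs -> systemic risk: {compute > SYSTEMIC_RISK_THRESHOLD}")
# 6.30e+24 FLOPs -> systemic risk: False
```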

The EU AI Act: scope and penalties

Any enterprise active in AI must abide by the new rules. And since the Act is extraterritorial, it applies to any organisation whose AI systems are placed on the EU market or used within the EU, not just those based in the bloc. This is significant, given that most market leaders in AI – OpenAI, Google, Nvidia, DeepSeek etc – are based overseas.

The EU will punish non-compliance with severe fines as follows:

•  Launching prohibited AI systems: fines can reach 7% of worldwide annual turnover or €35 million (whichever is higher).
•  High-risk AI and GPAI breaches: 3% of worldwide annual turnover or €15 million.
•  Providing incorrect information: 1% of worldwide annual turnover or €7.5 million.
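
The “whichever is higher” mechanic is simple arithmetic. Here is a minimal Python sketch, assuming the same logic applies at each tier:

```python
def applicable_fine(worldwide_turnover_eur: float, pct: float, fixed_eur: float) -> float:
    """Return the larger of the percentage-based and the fixed fine."""
    return max(worldwide_turnover_eur * pct, fixed_eur)

# Hypothetical example: a firm with EUR 2bn worldwide annual turnover that
# launches a prohibited AI system (7% of turnover or EUR 35m, whichever is higher).
fine = applicable_fine(2e9, 0.07, 35e6)
print(f"EUR {fine:,.0f}")  # -> EUR 140,000,000
```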

How should companies prepare for the AI Act?

The Act is clear: unacceptable AI applications are already prohibited, and high-risk AI applications must achieve compliance by August 2026. Companies need to act now to stay within the law.

Experts advise the following basic steps:

•  Create a needs analysis. How is AI currently used in the organisation? How qualified are staff? What training is needed? The EU has built a website to help firms with these questions. A simple system inventory, as sketched after this list, is one way to start.

•  Make a training plan. The Act says firms need “AI literacy” to use AI systems safely and effectively. Training should reflect this.

•  Appoint an AI specialist. This is not mandatory, but it may be advisable to have someone with detailed knowledge of AI systems and some legal expertise.

•  Get familiar with AI standards. To prepare for the new rules, organisations should know about ISO/IEC 42001, the technical standard for AI management systems. Companies can use it to create AI management systems that align with new regulatory requirements. Also, firms should be aware of technical standards that apply to specific verticals, such as aeronautics.
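
As referenced in the needs-analysis step above, a simple system inventory is a practical starting point. The Python sketch below is hypothetical; the record fields are illustrative and not prescribed by the Act.

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One row in a hypothetical AI-system inventory for a needs analysis."""
    name: str
    purpose: str
    risk_tier: str                       # e.g. "high", per the categories above
    staff_trained: bool = False
    actions: list[str] = field(default_factory=list)

inventory = [
    AISystemRecord("cv-screening", "candidate ranking", "high"),
    AISystemRecord("spam-filter", "inbox filtering", "low/minimal"),
]

# Flag high-risk systems whose operators lack AI literacy training.
for system in inventory:
    if system.risk_tier == "high" and not system.staff_trained:
        system.actions.append("schedule AI literacy training")

print(inventory[0].actions)  # -> ['schedule AI literacy training']
```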

The EU AI Act may be the most significant and detailed AI regulation in the world. It reflects Europe’s attempt to stimulate an important new industry while protecting public safety. This is a challenging balance. Stakeholders must pay close attention to the Act to take advantage of its provisions and avoid breaches.