The bloc bans “unacceptable‑risk” AI practices and imposes steep fines, with national and EU bodies overseeing compliance through 2026.
On 2 February 2026, one year after its outright ban on “unacceptable‑risk” AI practices such as social scoring and emotion recognition in workplaces and schools took effect, the European Union began actively enforcing its landmark Artificial Intelligence Act.
This marks a pivotal step in the bloc’s effort to regulate AI systems that pose significant risks to fundamental rights and public safety. The law, which entered into force in August 2024, is being rolled out in phases: the first wave of prohibitions and obligations has applied since early 2025, with broader enforcement ramping up through 2026.
Under the Act, eight categories of AI practices are now formally banned, including systems that manipulate human behavior, exploit people’s vulnerabilities, enable indiscriminate mass surveillance, or underpin social‑scoring regimes.
Firms found in breach of these prohibitions can face fines of up to €35 million or 7% of global annual turnover, whichever is higher — penalties that exceed those under the GDPR and underscore the EU’s hardline stance.
The enforcement architecture combines EU‑level oversight with national authorities: the European Commission’s newly established AI Office supervises general‑purpose AI models and can impose fines of up to €15 million or 3% of global annual turnover, whichever is higher, for non‑compliance, while national market‑surveillance bodies police high‑risk AI systems on the ground. These authorities can demand corrective measures, restrict deployments, and escalate cross‑border cases to the Commission and other member states.
Most high‑risk AI systems, such as those used in recruitment, credit scoring, critical infrastructure, and law‑enforcement profiling, must meet strict requirements by 2 August 2026, including risk‑management frameworks, data‑quality checks, human oversight, and detailed technical documentation. At the same time, transparency rules for “limited‑risk” applications will come into force, requiring, for instance, that deepfakes be labeled and that chatbots disclose themselves as AI, giving users clearer signals about when they are interacting with machine‑generated content.
Business groups warn that the phased rollout still leaves many firms scrambling to map their obligations, especially smaller companies without dedicated compliance teams. Nevertheless, EU officials describe the AI Act as a global benchmark, designed to shape how AI is developed and deployed worldwide while anchoring innovation in safety, accountability, and respect for fundamental rights.