The European Union (EU) officially began enforcing its pioneering Artificial Intelligence (AI) Act on Sunday, marking the first phase of comprehensive regulations aimed at controlling the use of AI technologies across member states.
This landmark legislation is the first of its kind globally and seeks to establish a framework for AI governance that balances innovation with public safety and democratic values.
The AI Act formally entered into force in August 2024. As of this week, companies must comply with bans on certain AI systems and ensure adequate AI literacy among their staff. Non-compliance could result in significant penalties, including fines of up to 35 million euros ($35.8 million) or 7% of a company’s global annual revenue — whichever is higher. These penalties exceed those available under the EU’s General Data Protection Regulation (GDPR).
The Act prohibits AI applications deemed to pose an “unacceptable risk” to citizens. These include social scoring systems, real-time facial recognition in public spaces, biometric identification used to categorize individuals by sensitive attributes, predictive policing, and AI systems that manipulate human behavior.
This week marks only the start of enforcement; the remaining provisions of the AI Act will be phased in over the next 18 months. Tasos Stampelos, head of EU public policy at Mozilla, acknowledged the law’s complexity, stating that while “not perfect,” it is “very much needed” to ensure product safety and ethical AI usage.
Experts believe that compliance will hinge on forthcoming guidelines, technical standards, and secondary legislation defining how companies should adhere to the rules.
Technology executives and investors have expressed concerns about the regulatory burden the AI Act may place on innovation. Critics argue that stringent rules could stifle technological advancements and place European firms at a competitive disadvantage compared to counterparts in the US and China.
Prince Constantijn of the Netherlands voiced apprehension about Europe’s focus on regulation, stating:
“Our ambition seems to be limited to being good regulators… It’s very hard to do that in such a fast-moving space.”
Others see the Act as an opportunity for Europe to set a global benchmark for trustworthy AI. Diyan Bogdanov, Director of Engineering Intelligence at Payhawk, noted that the requirements for bias detection, risk assessments, and human oversight “aren’t limiting innovation — they’re defining what good looks like.”
One of the most contentious aspects of the AI Act is its exemptions for law enforcement and migration authorities. Despite bans on real-time facial recognition and emotion recognition in public spaces, exceptions remain for national security and the prevention of serious crime.
Activists warn that these loopholes could undermine the intended protections of the Act. Caterina Rodelli of Access Now highlighted concerns that exemptions might allow AI lie detectors at borders and real-time surveillance by law enforcement agencies.
As the EU continues to roll out the AI Act, it faces the challenge of enforcing the regulations uniformly across member states, whose governments have until August to appoint the national authorities responsible for enforcement.
Despite criticism and concerns over loopholes, proponents argue that the AI Act sets a vital precedent for safeguarding democratic values in an era of rapid technological advancement. Henna Virkkunen, the EU’s tech policy chief, hailed the legislation as both a “protector for citizens” and an “enabler for innovation.”