European Parliament Ratifies AI Act: A Milestone In AI Regulation


Key Takeaways:
– The EU Parliament has approved the AI Act, the first significant law regulating AI use.
– Four levels of AI applications will have varying degrees of restrictions and penalties.
– High-risk and extreme-risk AI applications will require government approval or could face bans.
– The AI Act will officially become European law by May or June, with subsequent phases of enforcement.
– The response to the AI Act’s passage has been largely positive, marking a new era in AI development and use.

On Wednesday, the European Parliament made a significant move by approving the first major legal framework for controlling artificial intelligence usage – the pioneering AI Act. The bill, first proposed by the European Commission in 2021, is designed to shield consumers from the adverse effects of AI. It does so by creating a unified regulatory and legal structure that governs how AI is developed and applied by companies, and sets out the consequences of non-compliance.

**A Four-Tier System for AI**

The new legislation introduces four distinct AI risk categories, each with its own restrictions and penalties. Search engines, classified as low-risk AI applications, will remain unregulated. In contrast, slightly riskier applications such as chatbots will face certain transparency requirements.

On the higher end of the spectrum, AI applications with significant risk, such as self-driving cars, credit scoring, certain uses in law enforcement, and safety components of products like robot-assisted surgery, will need government approval before they can be deployed. These high-risk systems will be subject to minimum safety standards set by the EU, which will also maintain a database of all high-risk AI systems.

The law also addresses extreme-risk applications such as social scoring systems, public-facing biometric identification, emotion recognition, and predictive policing. These applications will be banned outright, though narrow exceptions may apply for law enforcement purposes.

**Enforcement Phases of the AI Act**

Generative AI applications will need to meet specific transparency standards before they can be used. The most powerful general-purpose AI models, those that could present systemic risks, will face additional requirements, including model evaluations and risk assessments.

The AI Act is set to become law in Europe by May or June, pending formal approval from member countries. Once it comes into effect, enforcement will proceed in phases. Bans on extreme-risk AI applications are expected to apply six months after the Act takes effect. Codes of practice will follow at the nine-month mark, along with the introduction of AI governance requirements. However, it will take up to 36 months until all the requirements for high-risk systems are fully implemented.

**Responses to the AI Act Passage**

The AI Act’s official passage was met with applause, including from Thierry Breton, the European Commissioner for the Internal Market. Breton voiced his support in an enthusiastic statement emphasizing Europe’s role as a global standard-setter in AI.

Ashley Casovan, Managing Director of the International Association of Privacy Professionals’ AI Governance Center, hailed the legislation, calling it the beginning of a new era anchored in human-centric values. Likewise, Forrester Principal Analyst Enza Iannopollo stressed the Act’s importance in establishing the EU as the trendsetter for trustworthy and responsible AI.

Notably, international companies – including those in the US – will need to adapt to these changes and monitor the evolving legal landscape, particularly for high-risk systems. Danny Manimbo, an expert at IT compliance firm Schellman, compared the AI Act to the EU’s General Data Protection Regulation (GDPR), emphasizing that early preparation will be key to compliance readiness.

In closing, the European Parliament’s ratification of the AI Act is a critical step towards a new era of AI development and use. With its human-centric approach, it sets global standards and paves the way for safer, fairer, and more responsible AI applications across the globe.

Jonathan Browne is the CEO and Founder of Livy.AI (https://livy.ai).
