The world’s first comprehensive artificial intelligence regulations will go into effect next month. The European Council—the European Union’s highest-ranking political body—gave its final approval to the AI Act, a law designed to limit the influence of “high-risk” AI applications while banning those with “unacceptable risk” outright.
Beginning this June, developers will be responsible for complying with regulations specific to the type of AI application with which they work. The AI Act establishes four risk categories: unacceptable risk, high risk, limited risk, and minimal risk. Depending on the category into which an AI system falls, its developer could be required to establish a risk management system, conduct extensive data governance, provide downstream developers with usage instructions, and share technical documentation with the European Commission’s AI office.
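The tiered structure described above can be sketched as a simple lookup table, purely as an illustration: the four category names come from the Act, and the high-risk obligation list repeats the examples mentioned in this article rather than an exhaustive legal mapping.

```python
from enum import Enum


class RiskTier(Enum):
    """The four risk categories established by the EU AI Act."""
    UNACCEPTABLE = "unacceptable risk"  # banned outright
    HIGH = "high risk"
    LIMITED = "limited risk"
    MINIMAL = "minimal risk"


# Illustrative, non-exhaustive obligations per tier, drawn from the
# examples above; actual compliance depends on the Act's full text.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["withdraw the system from the EU market"],
    RiskTier.HIGH: [
        "establish a risk management system",
        "conduct extensive data governance",
        "provide downstream developers with usage instructions",
        "share technical documentation with the European Commission's AI office",
    ],
    RiskTier.LIMITED: [],   # obligations for these tiers are not
    RiskTier.MINIMAL: [],   # detailed in this article
}


def obligations_for(tier: RiskTier) -> list[str]:
    """Look up the illustrative obligation list for a given risk tier."""
    return OBLIGATIONS[tier]


if __name__ == "__main__":
    for item in obligations_for(RiskTier.HIGH):
        print(item)
```

The point of the sketch is simply that compliance duties scale with the assigned category, which is why the first step for any developer is determining which tier their system falls into.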
Those who develop and deploy applications with “unacceptable risk” will be forced to withdraw them while facing financial and legal penalties. These systems include social scoring, biometric categorization (except when law enforcement uses it under specific circumstances), facial recognition scraping, and any application that attempts to manipulate people’s behavior.
The AI Act has undergone more than three years of research, debate, and amendment since it was first proposed in April 2021. It will now regulate the use and dissemination of AI systems designed or used in the EU's 27 member states. Organizations from IBM to the Ada Lovelace Institute have thrown their support behind the AI Act, largely because of the law's focus on "risk and accountability, rather than algorithms." Some experts say the AI Act isn't perfect (Amnesty International argues the law prioritizes law enforcement agencies and industry over human rights), but its regulations are broadly seen as a step toward preventing "some of the most dystopian AI scenarios." Some hope the law's language and scope will be adjusted over time to account for new risks as they arise.
“This landmark law, the first of its kind in the world, addresses a global technological challenge that also creates opportunities for our societies and economies,” Mathieu Michel, Belgium’s minister of digitization, said. “With the AI Act, Europe emphasizes the importance of trust, transparency, and accountability when dealing with new technologies while at the same time ensuring this fast-changing technology can flourish and boost European innovation.”
The EU's landmark law is expected to offer a blueprint for countries looking to implement their own AI regulations. The Biden administration introduced an "AI Bill of Rights" in 2022, but its guidelines are not legally enforceable and therefore amount to voluntary recommendations. Individual states are enacting their own AI rules and guidelines; Colorado signed its risk-based AI law on Monday. In the meantime, though, it is up to individual organizations to police their own AI-related practices.