The EU AI Act, a groundbreaking piece of legislation governing artificial intelligence, has been agreed upon by European Union policymakers, establishing the most comprehensive framework to date for overseeing this transformative technology.
EU AI Act discussions took 38 hours
This consensus on the EU AI Act emerged after extensive negotiations, spanning nearly 38 hours, between legislators and policymakers.
“The AI Act is a global first. A unique legal framework for the development of AI you can trust. And for the safety and fundamental rights of people and businesses. A commitment we took in our political guidelines – and we delivered. I welcome today’s political agreement,” stated EU chief Ursula von der Leyen.
Since the introduction of OpenAI’s ChatGPT last year, which significantly raised public awareness of the swiftly evolving AI sector, there has been a notable acceleration in the efforts to pass the AI Act. Initially proposed by the EU’s executive branch in 2021, the EU AI Act is now widely regarded as a model for governments worldwide. It aims to harness AI’s potential benefits while mitigating various risks, such as misinformation, job displacement, and copyright infringement.
A preliminary political agreement on the Artificial Intelligence Act was reached after negotiators from the European Parliament and the EU’s 27 member states resolved substantial disagreements on contentious issues, including generative AI and law enforcement’s use of facial recognition technology.
European Commissioner Thierry Breton celebrated the breakthrough with a “Deal!” tweet, marking the finalization of a political agreement on the Artificial Intelligence Act.
— Thierry Breton (@ThierryBreton) December 8, 2023
This sentiment was echoed by the parliamentary committee spearheading the EU Parliament’s negotiation efforts, announcing the accord on the EU AI Act. Initially slowed by debates over regulating language models that utilize online data and AI’s application in police and intelligence operations, the legislation is now poised for approval by EU member states and the Parliament.
The law will mandate that technology companies operating within the EU disclose the data underpinning their AI systems and conduct rigorous testing, particularly for high-stakes uses like autonomous vehicles and healthcare applications. It prohibits the blanket collection of images from the internet or security cameras for facial recognition databases, though it allows for “real-time” facial recognition by law enforcement in tackling terrorism and severe criminal activity.
Technology companies that fail to comply with the EU AI Act will be subject to stringent financial penalties, facing fines up to seven percent of their global revenue. The severity of the fines will be contingent on the nature of the violation and the size of the company. This EU legislation stands out as the most thorough attempt yet to establish regulatory oversight over AI, amidst a growing assortment of guidelines and regulations around the world.
Internationally, other nations are advancing in their own directions. In the United States, President Joe Biden issued an executive order last October, concentrating on AI’s influence on national security and issues of discrimination. Meanwhile, China has introduced regulations mandating that AI technologies align with “socialist core values”. In contrast, countries like the UK and Japan have opted for a more relaxed, less interventionist stance towards AI regulation.
The race to regulate AI
The EU initially took the forefront in the global effort to establish AI regulations, unveiling its initial draft in 2021. However, the surge in generative AI’s popularity necessitated swift updates to the proposal, which is seen as a potential global standard. Generative AI systems, such as OpenAI’s ChatGPT, have captivated the global audience with their capacity to generate human-like text, images, and music. However, they have also sparked concerns about their impact on employment, privacy, copyright protection, and even human safety.
In response, countries including the USA, UK, and China, along with international groups such as the G7, have begun introducing their own regulatory frameworks for AI, albeit still trailing behind Europe’s advancements. The final form of the EU AI Act awaits endorsement from the EU Parliament’s 705 lawmakers before the upcoming EU-wide elections. This step is anticipated to be a procedural matter.
The AI Act’s initial design aimed to address risks associated with various AI functionalities, categorizing them from low to unacceptable risk. However, the scope was broadened to include foundation models like OpenAI’s ChatGPT and Google’s Bard chatbot. These foundation models, crucial for general-purpose AI services, were a major point of contention in Europe. Despite resistance, notably from France, which advocated self-regulation to help European generative AI firms compete with major U.S. rivals like Microsoft-backed OpenAI, a provisional compromise was reached early in the negotiations.
Known as large language models, these systems are trained on extensive datasets of text and images from the internet. Unlike traditional AI, which processes data and performs tasks based on pre-set rules, generative AI can create novel content, marking a significant evolution in AI capabilities.
Key changes awaiting
The EU AI Act is set to bring about significant changes to the regulation of artificial intelligence, particularly within the European Union. Key changes and impacts include:
- Establishes the most comprehensive framework for AI oversight in the EU, influencing global standards.
- Requires technology companies in the EU to disclose AI training data and conduct rigorous testing.
- Targets critical AI applications in areas like autonomous vehicles and healthcare.
- Prohibits indiscriminate scraping of images for facial recognition databases, with limited exceptions for law enforcement.
- Introduces stringent financial penalties for non-compliance, with fines up to seven percent of global revenue.
- Positions Europe as a leader in global AI regulation.
- Balances AI benefits with risks like misinformation, job displacement, and copyright infringement.
- Expands scope to include foundation models like ChatGPT and Google’s Bard, addressing challenges in generative AI.
- Sets regulatory benchmarks for managing AI’s impact on employment, privacy, and safety.
Featured image credit: Kerem Gülen/DALL-E 3