Dataconomy

The EU’s sweeping AI law is about to take full effect

A critical August 2 deadline is approaching for powerful "systemic risk" AI models like GPT-4 and Gemini 2.5 Pro to comply with the new rules

By Aytun Çelebi
July 21, 2025
in Artificial Intelligence, News

The European Union's AI Act entered into force last year, and the Commission has since released guidelines to ensure compliance while balancing AI innovation with safety. That effort culminated in the July 18 launch of the AI Act Explorer, a comprehensive guide for companies navigating the new rules.

The AI Act, established to introduce safeguards for advanced artificial intelligence models while cultivating a competitive and innovative ecosystem for AI enterprises, assigns distinct risk classifications to different models. Henna Virkkunen, EU Commission Executive Vice President for Technological Sovereignty, Security and Democracy, told Reuters that the Commission's guidelines support the smooth and effective application of the AI Act.

Under the regulatory framework of EU law, artificial intelligence models are categorized into one of four distinct risk levels: unacceptable risk, high risk, limited risk, and minimal risk. AI classified within the unacceptable risk category faces a prohibition within the EU. This classification specifically encompasses applications such as facial recognition systems and social scoring mechanisms. Other categories are determined by the computational capacity of the AI or its designated functionalities.


The European Union defines artificial intelligence models presenting systemic risks as those trained using "greater than 10²⁵ floating point operations (FLOPs)." Notable AI models currently falling under this classification include OpenAI's GPT-4 and o3, Google's Gemini 2.5 Pro, Anthropic's more recent Claude models, and xAI's Grok-3. The release of the AI Act Explorer guidance precedes the August 2 deadline by approximately two weeks; by that date, general-purpose AI models and those identified as posing systemic risks must comply with the Act's provisions.
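As a rough illustration, the compute criterion reduces to a single comparison. The helper name and the sample FLOP counts below are hypothetical for illustration, not official Commission figures:

```python
# Hedged sketch of the AI Act's systemic-risk compute criterion:
# a model falls in the systemic-risk class when its training compute
# exceeds 10^25 floating point operations. The function name and the
# sample figures are illustrative assumptions, not official values.
SYSTEMIC_RISK_FLOPS = 1e25

def is_systemic_risk(training_flops: float) -> bool:
    """True when training compute exceeds the Act's 10^25 FLOP threshold."""
    return training_flops > SYSTEMIC_RISK_FLOPS

print(is_systemic_risk(2e25))   # above the threshold -> True
print(is_systemic_risk(5e23))   # well below it -> False
```

In practice, of course, the classification also depends on the Commission's own designation decisions, not on the compute figure alone.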


What will the EU AI Act actually change?


Makers of AI models identified as posing systemic risks face specific obligations: conducting comprehensive model evaluations to identify potential systemic risks, documenting the adversarial testing performed to mitigate them, reporting serious incidents to EU and national authorities when they occur, and implementing appropriate cybersecurity measures to guard against misuse or compromise of their systems. The Act places responsibility squarely on AI companies to identify and prevent potential systemic risks at their origin.

The AI Act Explorer is designed to give AI developers explicit guidance on how the Act's various provisions apply to their operations. Companies also have access to the EU's accompanying compliance checker, a tool for determining their precise obligations under the Act. Non-compliance can result in substantial financial penalties: fines range from €7.5 million (about $8.7 million) or 1.5% of a company's global turnover, up to a maximum of €35 million or 7% of global turnover, with the specific amount contingent on the severity of the violation.
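The penalty bands above can be sketched arithmetically. Note the assumption, mine rather than the article's, that within a band the applicable maximum is the higher of the fixed amount and the turnover percentage; real fines are set case by case under the Act's violation-specific tiers:

```python
# Illustrative sketch of the penalty range described above.
# Assumption: for a given band, the applicable maximum is the higher
# of the fixed amount and the turnover percentage. Actual fines
# depend on the specific violation and enforcement decisions.
def max_fine_eur(global_turnover_eur: float, severe: bool) -> float:
    if severe:  # top band: EUR 35M or 7% of global turnover
        return max(35_000_000, 0.07 * global_turnover_eur)
    # lower band: EUR 7.5M or 1.5% of global turnover
    return max(7_500_000, 0.015 * global_turnover_eur)

# A company with EUR 2B global turnover facing a severe violation:
print(f"{max_fine_eur(2_000_000_000, severe=True):,.0f}")  # 140,000,000
```

For large firms, the turnover percentage dominates the fixed cap, which is why the 7% figure draws most of the industry's attention.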

Critics of the AI Act have characterized its regulations as inconsistent and asserted that they inhibit innovation. On July 18, Joel Kaplan, Meta's Chief Global Affairs Officer, declared that the company would not endorse the EU's Code of Practice for general-purpose AI models, a voluntary framework aligned with the AI Act. Kaplan stated on LinkedIn that the Code introduces a number of legal uncertainties for model developers, alongside measures that extend significantly beyond the scope of the AI Act. Earlier in July, chief executives from companies including Mistral AI, SAP, and Siemens issued a joint statement asking the EU to pause implementation of the regulations.

Proponents of the Act maintain that it will serve to restrain companies from prioritizing profit at the expense of consumer privacy and safety. Mistral and OpenAI have both committed to signing the Code of Practice, a voluntary mechanism that enables companies to demonstrate their alignment with the binding regulations. OpenAI recently launched ChatGPT agent, which possesses the capability to utilize a virtual computer for executing multi-step tasks, including contacting individuals at small businesses.



COPYRIGHT © DATACONOMY MEDIA GMBH, ALL RIGHTS RESERVED.
