OpenAI chief pushes for international body to oversee powerful AI

Sam Altman advocates for an international agency to monitor powerful AI

By Emre Çıtak
May 13, 2024
in Artificial Intelligence

The rapid development of Artificial Intelligence (AI) has sparked a global conversation about its potential impact and the need for responsible development. A key aspect of this conversation centers on AI regulation, with experts grappling with how to ensure AI is used safely and ethically.

One prominent voice in this discussion is Sam Altman, the CEO of OpenAI, a research company focused on developing safe and beneficial artificial general intelligence.

In a recent podcast interview, Altman advocated for the establishment of an international agency to monitor and ensure the “reasonable safety” of powerful AI systems.

Why an international agency?

Altman’s proposal for an international agency stems from his belief that the most powerful AI systems will have the potential to cause significant global harm. He argues that the negative impacts of such advanced AI could transcend national borders, making it difficult for individual countries to effectively regulate them on their own.

Altman, speaking on the All-In podcast, expressed concern about the near future, stating, “there will come a time…where frontier AI systems are capable of causing significant global harm”.

Image caption: The rapid development of AI within the first quarter of 2024 has sparked a global conversation about safe and ethical AI use (Image credit)

He envisions an international agency specifically focused on “looking at the most powerful systems and ensuring reasonable safety testing”.

However, Altman acknowledges the need for a balanced approach. He emphasizes the dangers of “regulatory overreach,” seeking a framework that avoids excessive restrictions while still mitigating risks. He highlights the potential pitfalls of both under-regulation and over-regulation.

Laws can’t catch up with advancements in AI

This conversation about AI regulation coincides with ongoing legislative efforts worldwide. The European Union recently enacted the Artificial Intelligence Act, aiming to categorize AI risks and prohibit unacceptable applications. Similarly, the United States saw President Biden sign an executive order promoting transparency in powerful AI models. California has also emerged as a leader in AI regulation, with lawmakers considering a multitude of relevant bills.

Altman argues that an international agency offers greater adaptability compared to national legislation. He emphasizes the rapid pace of AI development, suggesting that rigid laws would quickly become outdated. He expresses skepticism towards lawmakers’ ability to craft future-proof regulations, stating, “written in law is in 12 months it will all be written wrong.”

To illustrate the point, Altman compares AI oversight to airplane safety regulations. He explains, “When significant loss of human life is a serious possibility…like airplanes…I think we’re happy to have some sort of testing framework.” His ideal scenario involves a system where users, like airplane passengers, can trust the safety of AI without needing to understand the intricate details.

Image caption: The complexity of AI systems makes it hard for regulators to understand and mitigate potential risks (Image credit)

Why hasn’t effective AI regulation been crafted yet?

Despite these ongoing efforts, crafting a truly effective regulatory framework for AI presents several challenges.

One key obstacle is the rapid pace of AI development. The field is constantly evolving, making it difficult for regulations to keep pace with technological advancements. Laws written today may be insufficient to address the risks posed by AI systems developed tomorrow.

Another challenge lies in the complexity of AI systems. These systems can be incredibly intricate and difficult to understand, even for experts. This complexity makes it challenging for regulators to identify and mitigate potential risks.

Furthermore, there’s a lack of global consensus on how to regulate AI. Different countries have varying priorities and risk tolerances when it comes to AI development. This makes it difficult to establish a unified international framework.

Finally, there’s a concern about stifling innovation. Overly restrictive regulations could hinder the development of beneficial AI applications.

Finding the right balance between safety, innovation, and international cooperation is crucial for crafting effective AI regulations.


Featured image credit: Freepik

Tags: AI, Featured, Law
