OpenAI chief pushes for international body to oversee powerful AI

Sam Altman advocates for an international agency to monitor powerful AI

by Emre Çıtak
May 13, 2024
in Artificial Intelligence

The rapid development of Artificial Intelligence (AI) has sparked a global conversation about its potential impact and the need for responsible development. A key aspect of this conversation centers on AI regulation, with experts grappling with how to ensure AI is used safely and ethically.

One prominent voice in this discussion is Sam Altman, the CEO of OpenAI, a research company focused on developing safe and beneficial artificial general intelligence.

In a recent podcast interview, Altman advocated for the establishment of an international agency to monitor and ensure the “reasonable safety” of powerful AI systems.


Why an international agency?

Altman’s proposal for an international agency stems from his belief that the most powerful AI systems will have the potential to cause significant global harm. He argues that the negative impacts of such advanced AI could transcend national borders, making it difficult for individual countries to effectively regulate them on their own.

Altman, speaking on the All-In podcast, expressed concern about the near future, stating, “there will come a time…where frontier AI systems are capable of causing significant global harm”.

[Image: The rapid development of AI within the first quarter of 2024 has sparked a global conversation about safe and ethical AI use (Image credit)]

He envisions an international agency specifically focused on “looking at the most powerful systems and ensuring reasonable safety testing”.

However, Altman acknowledges the need for a balanced approach. He emphasizes the dangers of “regulatory overreach,” seeking a framework that avoids excessive restrictions while still mitigating risks. He highlights the potential pitfalls of both under-regulation and over-regulation.

Laws can’t catch up with advancements in AI

This conversation about AI regulation coincides with ongoing legislative efforts worldwide. The European Union recently enacted the Artificial Intelligence Act, aiming to categorize AI risks and prohibit unacceptable applications. Similarly, the United States saw President Biden sign an executive order promoting transparency in powerful AI models. California has also emerged as a leader in AI regulation, with lawmakers considering a multitude of relevant bills.

Altman argues that an international agency offers greater adaptability compared to national legislation. He emphasizes the rapid pace of AI development, suggesting that rigid laws would quickly become outdated. He expresses skepticism towards lawmakers’ ability to craft future-proof regulations, stating, “written in law is in 12 months it will all be written wrong.”

In simpler terms, Altman compares AI oversight to airplane safety regulations. He explains, “When significant loss of human life is a serious possibility…like airplanes…I think we’re happy to have some sort of testing framework.” His ideal scenario involves a system where users, like airplane passengers, can trust the safety of AI without needing to understand the intricate details.

[Image: The complexity of AI systems makes it hard for regulators to understand and mitigate potential risks (Image credit)]

Why is there no true AI regulation yet?

Despite these ongoing efforts, crafting a truly effective regulatory framework for AI presents several challenges.

One key obstacle is the rapid pace of AI development. The field is constantly evolving, making it difficult for regulations to keep pace with technological advancements. Laws written today may be insufficient to address the risks posed by AI systems developed tomorrow.

Another challenge lies in the complexity of AI systems. These systems can be incredibly intricate and difficult to understand, even for experts. This complexity makes it challenging for regulators to identify and mitigate potential risks.

Furthermore, there’s a lack of global consensus on how to regulate AI. Different countries have varying priorities and risk tolerances when it comes to AI development. This makes it difficult to establish a unified international framework.

Finally, there’s a concern about stifling innovation. Overly restrictive regulations could hinder the development of beneficial AI applications.

Finding the right balance between safety, innovation, and international cooperation is crucial for crafting effective AI regulations.


Featured image credit: Freepik

Tags: AI, Featured, Law, Sam Altman
