Dataconomy
OpenAI evolves from controversial leader to safety advocate

CEO Sam Altman revealed that OpenAI will provide the U.S. AI Safety Institute with early access to its next major generative AI model for safety testing

By Emre Çıtak
August 1, 2024

OpenAI, the company behind ChatGPT, is taking steps to address concerns about AI safety and governance.

CEO Sam Altman recently announced that OpenAI is working with the U.S. AI Safety Institute to provide early access to its next major generative AI model for safety testing.

The move comes amid growing scrutiny of OpenAI’s commitment to AI safety and its influence on policy-making.


a few quick updates about safety at openai:

as we said last july, we’re committed to allocating at least 20% of the computing resources to safety efforts across the entire company.

our team has been working with the US AI Safety Institute on an agreement where we would provide…

— Sam Altman (@sama) August 1, 2024

Collaboration with the U.S. AI Safety Institute

The U.S. AI Safety Institute, a federal body aimed at assessing and addressing risks in AI platforms, will have the opportunity to test OpenAI’s upcoming AI model before its public release. While details of the agreement are scarce, this collaboration represents a significant step towards increased transparency and external oversight of AI development.

The partnership follows a similar deal OpenAI struck with the UK’s AI safety body in June, suggesting a pattern of engagement with government entities on AI safety issues.

[Image: The partnership follows a similar agreement with the UK’s AI safety body in June (image credit)]

Addressing safety concerns

OpenAI’s recent actions appear to be a response to criticism regarding its perceived deprioritization of AI safety research. The company previously disbanded a unit working on controls for “superintelligent” AI systems, leading to high-profile resignations and public scrutiny.

In an effort to rebuild trust, OpenAI has:

  1. Eliminated restrictive non-disparagement clauses.
  2. Created a safety commission.
  3. Pledged 20% of its compute resources to safety research.

However, some observers remain skeptical, particularly after OpenAI staffed its safety commission with company insiders and reassigned a top AI safety executive.

Influence on AI policy

OpenAI’s engagement with government bodies and its endorsement of the Future of Innovation Act has raised questions about the company’s influence on AI policymaking. The timing of these moves, coupled with OpenAI’s increased lobbying efforts, has led to speculation about potential regulatory capture.


Altman’s position on the U.S. Department of Homeland Security’s Artificial Intelligence Safety and Security Board further underscores the company’s growing involvement in shaping AI policy.

Looking ahead

As AI technology continues to advance rapidly, the balance between innovation and safety remains a critical concern. OpenAI’s collaboration with the U.S. AI Safety Institute represents a step towards more transparent and responsible AI development.

However, it also highlights the complex relationship between tech companies and regulatory bodies in shaping the future of AI governance.

The tech community and policymakers will be watching closely to see how this partnership unfolds and what impact it will have on the broader landscape of AI safety and regulation.


Featured image credit: Kim Menikh/Unsplash

Tags: Featured, OpenAI
