Dataconomy
OpenAI evolves from controversial leader to safety advocate

CEO Sam Altman revealed that OpenAI will provide the U.S. AI Safety Institute with early access to its next major generative AI model for safety testing

by Emre Çıtak
August 1, 2024
in News

OpenAI, the company behind ChatGPT, is taking steps to address concerns about AI safety and governance.

CEO Sam Altman recently announced that OpenAI is working with the U.S. AI Safety Institute to provide early access to its next major generative AI model for safety testing.

The move comes amid growing scrutiny of OpenAI’s commitment to AI safety and its influence on policymaking.

a few quick updates about safety at openai:

as we said last july, we’re committed to allocating at least 20% of the computing resources to safety efforts across the entire company.

our team has been working with the US AI Safety Institute on an agreement where we would provide…

— Sam Altman (@sama) August 1, 2024

Collaboration with the U.S. AI Safety Institute

The U.S. AI Safety Institute, a federal body aimed at assessing and addressing risks in AI platforms, will have the opportunity to test OpenAI’s upcoming AI model before its public release. While details of the agreement are scarce, this collaboration represents a significant step towards increased transparency and external oversight of AI development.

The partnership follows a similar deal OpenAI struck with the UK’s AI safety body in June, suggesting a pattern of engagement with government entities on AI safety issues.

Addressing safety concerns

OpenAI’s recent actions appear to be a response to criticism regarding its perceived deprioritization of AI safety research. The company previously disbanded a unit working on controls for “superintelligent” AI systems, leading to high-profile resignations and public scrutiny.

In an effort to rebuild trust, OpenAI has:

  1. Eliminated restrictive non-disparagement clauses.
  2. Created a safety commission.
  3. Pledged 20% of its compute resources to safety research.

However, some observers remain skeptical, particularly after OpenAI staffed its safety commission with company insiders and reassigned a top AI safety executive.

Influence on AI policy

OpenAI’s engagement with government bodies and its endorsement of the Future of AI Innovation Act have raised questions about the company’s influence on AI policymaking. The timing of these moves, coupled with OpenAI’s increased lobbying efforts, has led to speculation about potential regulatory capture.

Altman’s position on the U.S. Department of Homeland Security’s Artificial Intelligence Safety and Security Board further underscores the company’s growing involvement in shaping AI policy.

Looking ahead

As AI technology continues to advance rapidly, the balance between innovation and safety remains a critical concern. OpenAI’s collaboration with the U.S. AI Safety Institute represents a step towards more transparent and responsible AI development.

However, it also highlights the complex relationship between tech companies and regulatory bodies in shaping the future of AI governance.

The tech community and policymakers will be watching closely to see how this partnership unfolds and what impact it will have on the broader landscape of AI safety and regulation.


Featured image credit: Kim Menikh/Unsplash