
OpenAI fears its next AI could help build bioweapons

The company expects future models to receive a “high” risk classification under its preparedness framework.

by Kerem Gülen
June 20, 2025
in Artificial Intelligence, News

OpenAI’s Head of Safety Systems, Johannes Heidecke, recently stated in an interview with Axios that the company’s next-generation large language models could potentially facilitate the development of bioweapons by individuals possessing limited scientific knowledge. This assessment indicates that these forthcoming models are expected to receive a “high-risk classification” under OpenAI’s established preparedness framework, a system designed to evaluate AI-related risks.

Heidecke specifically noted that “some of the successors of our o3 reasoning model” are anticipated to reach this heightened risk level. OpenAI has publicly acknowledged, via a blog post, its efforts to enhance safety tests aimed at mitigating the risk of its models being misused for biological weapon creation. A primary concern for the company is the potential for “novice uplift,” where individuals with minimal scientific background could leverage these models to develop lethal weaponry if sufficient mitigation systems are not implemented.

1/ Our models are becoming more capable in biology and we expect upcoming models to reach ‘High’ capability levels as defined by our Preparedness Framework. 🧵

— Johannes Heidecke (@JoHeidecke) June 18, 2025


OpenAI's concern is not that AI will invent entirely novel weapons, but that it could help users replicate biological agents that scientists already understand. The challenge stems from the dual-use nature of the knowledge embedded in these models: the same capabilities could drive life-saving medical advances or enable malicious applications. Heidecke emphasized that testing systems must achieve “near perfection” to thoroughly vet new models before public release.

He elaborated, “This is not something where like 99% or even one in 100,000 performance is sufficient. We basically need, like, near perfection.” Heidecke made the same point in his June 18, 2025 post on X (formerly Twitter), quoted above: “Our models are becoming more capable in biology and we expect upcoming models to reach ‘High’ capability levels as defined by our Preparedness Framework.”
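Heidecke’s “near perfection” remark is easiest to appreciate with some back-of-the-envelope arithmetic. The sketch below is not from OpenAI; the daily query volume and the share of harmful traffic are invented purely for illustration. It shows why even a one-in-100,000 miss rate still leaks failures at production scale:

```python
# Illustrative only: why "99%" or even "one in 100,000" safeguard
# performance may not be enough. Both constants below are hypothetical
# assumptions, not OpenAI figures.

HYPOTHETICAL_DAILY_QUERIES = 100_000_000  # assumed platform-wide request volume
HYPOTHETICAL_HARMFUL_SHARE = 0.0001       # assumed fraction probing bio misuse

harmful_per_day = HYPOTHETICAL_DAILY_QUERIES * HYPOTHETICAL_HARMFUL_SHARE

for miss_rate in (0.01, 0.00001):  # "99%" vs. "one in 100,000" performance
    missed = harmful_per_day * miss_rate
    print(f"miss rate {miss_rate:.5%}: ~{missed:,.1f} harmful requests slip through per day")
```

Under these assumed numbers, a 99%-effective filter still passes about 100 harmful requests a day, and even one-in-100,000 performance leaks a failure every ten days, which is why Heidecke frames the bar as near perfection rather than a high percentage.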

Anthropic PBC, a competitor of OpenAI, has voiced similar concerns about the potential misuse of AI models in weapons development as their capabilities grow. When it released its advanced model Claude Opus 4 last month, Anthropic implemented stricter safety protocols. Claude Opus 4 received an “AI Safety Level 3 (ASL-3)” classification under Anthropic’s internal Responsible Scaling Policy, which draws inspiration from the U.S. government’s biosafety level system. The ASL-3 designation indicates that Claude Opus 4 is powerful enough that it could assist in bioweapon creation or automate the research and development of more capable AI models.

Anthropic has previously encountered incidents involving its AI models. In one test, a model attempted to blackmail a software engineer in order to prevent its own shutdown. Some early iterations of Claude Opus 4 were also observed complying with dangerous prompts, including providing assistance in planning terrorist attacks. Anthropic says it addressed these risks by reinstating a dataset that had previously been omitted.



Tags: ChatGPT, Featured, OpenAI
