Dataconomy
  • News
    • Artificial Intelligence
    • Cybersecurity
    • DeFi & Blockchain
    • Finance
    • Gaming
    • Startups
    • Tech
  • Industry
  • Research
  • Resources
    • Articles
    • Guides
    • Case Studies
    • Glossary
    • Whitepapers
  • Newsletter
  • + More
    • Conversations
    • Events
    • About
      • About
      • Contact
      • Imprint
      • Legal & Privacy
      • Partner With Us
Subscribe
No Result
View All Result
  • AI
  • Tech
  • Cybersecurity
  • Finance
  • DeFi & Blockchain
  • Startups
  • Gaming
Dataconomy
  • News
    • Artificial Intelligence
    • Cybersecurity
    • DeFi & Blockchain
    • Finance
    • Gaming
    • Startups
    • Tech
  • Industry
  • Research
  • Resources
    • Articles
    • Guides
    • Case Studies
    • Glossary
    • Whitepapers
  • Newsletter
  • + More
    • Conversations
    • Events
    • About
      • About
      • Contact
      • Imprint
      • Legal & Privacy
      • Partner With Us
Subscribe
No Result
View All Result
Dataconomy
No Result
View All Result

OpenAI adds threat filter to its smartest models

With o3’s advanced reasoning comes new safety risks, OpenAI adds a specialized system to block hazardous bio-threat advice.

byKerem Gülen
April 17, 2025
in Artificial Intelligence, Cybersecurity, News
Home News Artificial Intelligence

OpenAI has introduced a new monitoring system for its latest AI models, o3 and o4-mini, to detect and prevent prompts related to biological and chemical threats, according to the company’s safety report. The system, described as a “safety-focused reasoning monitor,” is designed to identify potentially hazardous requests and instruct the models to refuse to provide advice.

The new AI models represent a significant capability increase over OpenAI’s previous models and pose new risks if misused by malicious actors. O3, in particular, has shown increased proficiency in answering questions related to creating certain biological threats, as per OpenAI’s internal benchmarks. To mitigate these risks, the monitoring system was custom-trained to reason about OpenAI’s content policies and runs on top of o3 and o4-mini.

OpenAI adds threat filter to its smartest models
Image: OpenAI

To develop the monitoring system, OpenAI’s red teamers spent around 1,000 hours flagging “unsafe” biorisk-related conversations from o3 and o4-mini. In a simulated test, the models declined to respond to risky prompts 98.7% of the time. However, OpenAI acknowledges that this test did not account for users who might try new prompts after being blocked, and the company will continue to rely on human monitoring.

Stay Ahead of the Curve!

Don't miss out on the latest insights, trends, and analysis in the world of data, technology, and startups. Subscribe to our newsletter and get exclusive content delivered straight to your inbox.

According to OpenAI, o3 and o4-mini do not cross the “high risk” threshold for biorisks. Still, early versions of these models proved more helpful in answering questions related to developing biological weapons compared to o1 and GPT-4. The company is actively tracking the potential risks associated with its models and is increasingly relying on automated systems to mitigate these risks.

OpenAI is using a similar reasoning monitor to prevent GPT-4o’s native image generator from creating child sexual abuse material (CSAM). However, some researchers have raised concerns that OpenAI is not prioritizing safety as much as it should, citing limited time to test o3 on a benchmark for deceptive behavior and the lack of a safety report for GPT-4.1.


Featured image credit

Tags: chatgptopenAI

Related Posts

Texas Attorney General files lawsuit over the PowerSchool data breach

Texas Attorney General files lawsuit over the PowerSchool data breach

September 5, 2025
iPhone 17 Pro is expected to arrive with 48mp telephoto, variable aperture expected

iPhone 17 Pro is expected to arrive with 48mp telephoto, variable aperture expected

September 5, 2025
AI chatbots spread false info in 1 of 3 responses

AI chatbots spread false info in 1 of 3 responses

September 5, 2025
OpenAI to mass produce custom AI chip with Broadcom in 2025

OpenAI to mass produce custom AI chip with Broadcom in 2025

September 5, 2025
When two Mark Zuckerbergs collide

When two Mark Zuckerbergs collide

September 5, 2025
Deepmind finds RAG limit with fixed-size embeddings

Deepmind finds RAG limit with fixed-size embeddings

September 5, 2025

LATEST NEWS

Texas Attorney General files lawsuit over the PowerSchool data breach

iPhone 17 Pro is expected to arrive with 48mp telephoto, variable aperture expected

AI chatbots spread false info in 1 of 3 responses

OpenAI to mass produce custom AI chip with Broadcom in 2025

When two Mark Zuckerbergs collide

Deepmind finds RAG limit with fixed-size embeddings

Dataconomy

COPYRIGHT © DATACONOMY MEDIA GMBH, ALL RIGHTS RESERVED.

  • About
  • Imprint
  • Contact
  • Legal & Privacy

Follow Us

  • News
    • Artificial Intelligence
    • Cybersecurity
    • DeFi & Blockchain
    • Finance
    • Gaming
    • Startups
    • Tech
  • Industry
  • Research
  • Resources
    • Articles
    • Guides
    • Case Studies
    • Glossary
    • Whitepapers
  • Newsletter
  • + More
    • Conversations
    • Events
    • About
      • About
      • Contact
      • Imprint
      • Legal & Privacy
      • Partner With Us
No Result
View All Result
Subscribe

This website uses cookies. By continuing to use this website you are giving consent to cookies being used. Visit our Privacy Policy.