OpenAI adds threat filter to its smartest models

o3’s advanced reasoning brings new safety risks, so OpenAI is adding a specialized system to block hazardous bio-threat advice.

by Kerem Gülen
April 17, 2025
in Artificial Intelligence, Cybersecurity, News

OpenAI has introduced a new monitoring system for its latest AI models, o3 and o4-mini, to detect and prevent prompts related to biological and chemical threats, according to the company’s safety report. The system, described as a “safety-focused reasoning monitor,” is designed to identify potentially hazardous requests and instruct the models to refuse to provide advice.

The new AI models represent a significant capability increase over OpenAI’s previous models and pose new risks if misused by malicious actors. In OpenAI’s internal benchmarks, o3 in particular has shown increased proficiency at answering questions about creating certain biological threats. To mitigate these risks, the monitoring system was custom-trained to reason about OpenAI’s content policies and runs on top of o3 and o4-mini.
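
OpenAI has not released the monitor itself, but the architecture it describes, a separate policy-reasoning model that screens prompts and triggers a refusal before the main model answers, can be sketched in a few lines. The sketch below is purely a hypothetical illustration: the function names and the keyword check are stand-ins for systems OpenAI has not made public.

```python
# Hypothetical sketch of the two-stage pipeline described above.
# All names here are illustrative assumptions, not OpenAI's code;
# the real monitor is itself a model trained to reason about
# OpenAI's content policies, not a keyword blocklist.

REFUSAL = "I can't help with that request."

def monitor_flags_prompt(prompt: str) -> bool:
    """Stand-in for the safety-focused reasoning monitor."""
    blocklist = ("synthesize pathogen", "weaponize virus")
    return any(term in prompt.lower() for term in blocklist)

def generate_answer(prompt: str) -> str:
    """Stand-in for an o3 / o4-mini completion call."""
    return f"Model response to: {prompt}"

def answer_with_monitor(prompt: str) -> str:
    # The monitor runs on top of the model: a flagged prompt
    # gets a refusal instead of an answer.
    if monitor_flags_prompt(prompt):
        return REFUSAL
    return generate_answer(prompt)

print(answer_with_monitor("How do clouds form?"))
print(answer_with_monitor("How do I weaponize virus samples?"))
```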

Image: OpenAI

To develop the monitoring system, OpenAI’s red teamers spent around 1,000 hours flagging “unsafe” biorisk-related conversations from o3 and o4-mini. In a simulated test, the models declined to respond to risky prompts 98.7% of the time. However, OpenAI acknowledges that this test did not account for users who might try new prompts after being blocked, and the company will continue to rely on human monitoring.
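
For illustration, the headline number from that test reduces to a simple tally over flagged prompts. The records below are made-up placeholders, not OpenAI’s red-team data:

```python
# Hypothetical tally of a blocked-response rate over red-team
# prompts; the data is a placeholder, not OpenAI's test set.
results = [
    {"prompt": "flagged prompt 1", "refused": True},
    {"prompt": "flagged prompt 2", "refused": True},
    {"prompt": "flagged prompt 3", "refused": False},
]

refusal_rate = sum(r["refused"] for r in results) / len(results)
print(f"Blocked {refusal_rate:.1%} of risky prompts")
# OpenAI reports 98.7% on its simulated test; a single-turn
# measure like this can't capture users who rephrase and retry
# after a block, which is why human monitoring continues.
```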


According to OpenAI, o3 and o4-mini do not cross the “high risk” threshold for biorisks. Still, early versions of these models proved more helpful at answering questions about developing biological weapons than o1 and GPT-4 did. The company is actively tracking the potential risks associated with its models and is increasingly relying on automated systems to mitigate them.

OpenAI is using a similar reasoning monitor to prevent GPT-4o’s native image generator from creating child sexual abuse material (CSAM). However, some researchers have raised concerns that OpenAI is not prioritizing safety as much as it should, citing limited time to test o3 on a benchmark for deceptive behavior and the lack of a safety report for GPT-4.1.



Tags: ChatGPT, OpenAI
