OpenAI’s ChatGPT-5 finally learns the “half of knowledge”: saying “I don’t know”

ChatGPT-5 now says 'I don't know' to improve accuracy and transparency. The update sets a confidence threshold so the model states uncertainty rather than generating speculative or incorrect answers.

By Emre Çıtak
September 17, 2025
in Artificial Intelligence

OpenAI’s ChatGPT-5 has begun responding with “I don’t know” when it cannot confidently answer a query, a significant change from the typical chatbot behavior of providing an answer regardless of its reliability.

The new feature, which gained attention after users shared interactions on social media, is part of an effort to address the long-standing problem of AI-generated misinformation.

Addressing the problem of AI hallucinations

A persistent challenge for large language models is the issue of “hallucinations,” where the AI generates fabricated information, such as fake quotes or non-existent studies, in a confident tone. This is particularly dangerous in fields like medicine or law, where users might act on incorrect information without realizing it is unreliable. Users often accept these outputs at face value because the AI’s authoritative delivery masks the fabricated details.

ChatGPT-5’s new approach directly counters this by opting for honesty over invention. When the model encounters a query that falls outside its training data or involves unverifiable claims, it will now state its uncertainty rather than generating a speculative or incorrect answer.

How the “I don’t know” feature works

Large language models like ChatGPT do not retrieve facts from a database. Instead, they operate by predicting the next word in a sequence based on statistical patterns learned from vast amounts of text. This method allows for fluent, human-like conversation but can also lead to plausible-sounding inaccuracies when the training data is limited on a specific topic.
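To make the prediction step concrete, the toy Python sketch below scores a handful of candidate next words with invented logits and turns them into a probability distribution. It is purely illustrative; the vocabulary and numbers are made up and this is not OpenAI’s code.

```python
import math

# Invented logits a language model might assign to candidate next tokens
# after the prompt "The capital of France is". Purely illustrative values.
logits = {"Paris": 9.1, "Lyon": 4.2, "London": 3.0, "banana": -2.5}

def softmax(scores):
    """Turn raw logits into a probability distribution over tokens."""
    m = max(scores.values())                      # subtract max for numerical stability
    exps = {tok: math.exp(s - m) for tok, s in scores.items()}
    total = sum(exps.values())
    return {tok: v / total for tok, v in exps.items()}

probs = softmax(logits)
next_token = max(probs, key=probs.get)            # greedy decoding: most likely token
print(f"{next_token}: {probs[next_token]:.3f}")   # e.g. "Paris: 0.992"
```

When the training data on a topic is thin, that distribution flattens and the most likely token can still be wrong, which is exactly the failure mode a confidence check is meant to catch.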

OpenAI has built a confidence threshold into ChatGPT-5. When the model’s prediction for an answer falls below a certain reliability score, it triggers the “I don’t know” response. This mechanism prevents the model from delivering a grammatically correct but factually baseless answer. Developers calibrated these thresholds through extensive testing to balance providing helpful information with maintaining accuracy.
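OpenAI has not published how this check is implemented. The sketch below is only a minimal illustration of the general idea, assuming access to per-token log-probabilities for a drafted answer; the function name, inputs, and threshold value are all assumptions, not a real ChatGPT-5 interface.

```python
def answer_or_abstain(draft_tokens, token_logprobs, threshold=-1.0):
    """Return the drafted answer only if its average token log-probability
    clears a confidence threshold; otherwise abstain.

    All names and numbers here are illustrative assumptions, not a
    published OpenAI mechanism.
    """
    avg_logprob = sum(token_logprobs) / len(token_logprobs)
    if avg_logprob < threshold:
        return "I don't know."
    return "".join(draft_tokens)

# A confident draft: tokens were generated with high probability.
print(answer_or_abstain(["Paris"], [-0.05]))    # -> Paris

# A shaky draft: low average log-probability trips the abstention.
print(answer_or_abstain(["Atlantis"], [-3.2]))  # -> I don't know.
```

The threshold itself has to be tuned, which matches the article’s point that developers calibrated it through testing to trade helpfulness against accuracy.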

Building user trust by communicating limitations

The new feature is designed to build user trust by making the AI’s limitations clear. By explicitly flagging when it is uncertain, ChatGPT-5 encourages users to seek external verification and use the tool more critically. This promotes a more responsible interaction, positioning the AI as a helpful assistant rather than an infallible source of information.

This move toward greater transparency aligns with a broader industry trend: other developers, such as Google with Gemini and Anthropic with Claude, are also exploring ways to build similar safeguards into their models. Admitting uncertainty mirrors how human experts operate, acknowledging the limits of their knowledge and consulting other sources. The feature represents a step toward more nuanced and responsible AI systems that can communicate their boundaries effectively.

