OpenAI’s ChatGPT-5 finally learns the “half of knowledge”: saying “I don’t know”

ChatGPT-5 now says 'I don't know' to improve accuracy and transparency. The update sets a confidence threshold so the model states uncertainty rather than generating speculative or incorrect answers.

by Emre Çıtak
September 17, 2025
in Artificial Intelligence

OpenAI’s ChatGPT-5 has begun responding with “I don’t know” when it cannot confidently answer a query, a significant change from the typical chatbot behavior of providing an answer regardless of its reliability.

The new feature, which gained attention after users shared interactions on social media, is part of an effort to address the long-standing problem of AI-generated misinformation.

Addressing the problem of AI hallucinations

A persistent challenge for large language models is the issue of “hallucinations,” where the AI generates fabricated information, such as fake quotes or non-existent studies, in a confident tone. This is particularly dangerous in fields like medicine or law, where users might act on incorrect information without realizing it is unreliable. Users often accept these outputs at face value because the AI’s authoritative delivery masks the fabricated details.

ChatGPT-5’s new approach directly counters this by opting for honesty over invention. When the model encounters a query that falls outside its training data or involves unverifiable claims, it will now state its uncertainty rather than generating a speculative or incorrect answer.

How the “I don’t know” feature works

Large language models like ChatGPT do not retrieve facts from a database. Instead, they operate by predicting the next word in a sequence based on statistical patterns learned from vast amounts of text. This method allows for fluent, human-like conversation but can also lead to plausible-sounding inaccuracies when the training data is limited on a specific topic.
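
To make the prediction idea concrete, here is a minimal, purely illustrative Python sketch of next-token selection. The vocabulary, scores, and prompt are made up for this example, and real models compute these scores with a large neural network rather than a hard-coded list.

```python
# Illustrative sketch: a language model scores every token in its vocabulary
# and picks the next word from the resulting probability distribution.
# The logits below are invented; real LLMs compute them with a neural network.
import math
import random

vocabulary = ["Paris", "London", "Berlin", "banana"]
logits = [4.2, 1.1, 0.8, -3.0]  # hypothetical scores for "The capital of France is ..."

# Softmax turns raw scores into a probability distribution over tokens.
exps = [math.exp(x) for x in logits]
total = sum(exps)
probs = [e / total for e in exps]

# Sample the next token according to those probabilities.
next_token = random.choices(vocabulary, weights=probs, k=1)[0]
print({t: round(p, 3) for t, p in zip(vocabulary, probs)}, "->", next_token)
```

Because the model always produces some distribution, it can emit a fluent continuation even when no token in that distribution is actually well supported by its training data, which is where hallucinations come from.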

OpenAI has implemented a confidence threshold in ChatGPT-5. When the model’s prediction for an answer falls below a certain reliability score, it triggers the “I don’t know” response. This mechanism prevents the model from delivering a grammatically correct but factually baseless answer. Developers calibrated these thresholds through extensive testing to balance providing helpful information with maintaining accuracy.
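
OpenAI has not published how this scoring works, so the sketch below is only a hypothetical illustration of such a confidence gate: the threshold value and the confidence() helper are assumptions for illustration, not ChatGPT-5’s actual mechanism.

```python
# Hypothetical sketch of a confidence gate like the one described above.
# The threshold and the confidence() proxy are assumptions, not OpenAI's
# published implementation.
CONFIDENCE_THRESHOLD = 0.75  # assumed cut-off; in practice tuned through testing

def confidence(token_probs: list[float]) -> float:
    """Crude proxy: average probability of the tokens in the generated answer."""
    return sum(token_probs) / len(token_probs)

def answer_or_abstain(answer: str, token_probs: list[float]) -> str:
    # Below the threshold, abstain instead of returning a plausible-sounding guess.
    if confidence(token_probs) < CONFIDENCE_THRESHOLD:
        return "I don't know."
    return answer

print(answer_or_abstain("The study was published in 2019.", [0.41, 0.38, 0.52]))  # abstains
print(answer_or_abstain("Paris is the capital of France.", [0.97, 0.95, 0.99]))   # answers
```

The design trade-off is the one the article describes: set the threshold too high and the model refuses questions it could answer; set it too low and confident-sounding fabrications slip through.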

Building user trust by communicating limitations

The new feature is designed to build user trust by making the AI’s limitations clear. By explicitly flagging when it is uncertain, ChatGPT-5 encourages users to seek external verification and use the tool more critically. This promotes a more responsible interaction, positioning the AI as a helpful assistant rather than an infallible source of information.

This move toward greater transparency aligns with a broader industry trend, as other companies, such as Google with Gemini and Anthropic with Claude, are also exploring ways to build similar safeguards into their AI models. The admission of uncertainty mirrors how human experts operate: they often acknowledge the limits of their knowledge and consult other sources. The feature represents a step toward more nuanced and responsible AI systems that can communicate their boundaries effectively.


Tags: ChatGPT, Featured, GPT-5, OpenAI
