A study of OpenAI’s ChatGPT-5 model found that it generates incorrect answers in approximately 25% of cases. The research attributes these inaccuracies to inherent limitations in the model’s training data and to its probabilistic reasoning architecture, as detailed in a Tom’s Guide report.
The model demonstrates a notable reduction in errors compared with its predecessor, GPT-4, registering 45% fewer factual mistakes and roughly one-sixth as many “hallucinated,” or entirely fabricated, answers. Despite these advances, the study confirms that ChatGPT-5 can still exhibit overconfidence, presenting factually incorrect information with a high degree of certainty. Hallucination, though diminished, remains a core issue for the model’s reliability.
Performance accuracy varies significantly with the domain of the task. For example, the model achieved 94.6% accuracy on the 2025 AIME mathematics test and a 74.9% success rate on a set of real-world coding assignments. The research indicates that errors become more prevalent in tasks that involve general knowledge or require complex, multi-step reasoning, where the model’s performance is less consistent.
When evaluated on the MMLU Pro benchmark, a rigorous academic test spanning subjects such as science, mathematics, and history, ChatGPT-5 scored approximately 87%. The study identifies several underlying causes for the remaining errors: an inability to fully grasp nuanced questions, reliance on training data that may be outdated or incomplete, and the model’s fundamental design as a probabilistic pattern-prediction mechanism, which can generate responses that are plausible but not factually correct.
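To make the pattern-prediction point concrete, the toy Python sketch below samples an answer from a made-up probability distribution over candidate completions. The question, candidate answers, and logit values are invented for illustration and have no connection to GPT-5’s actual internals; the point is only that a sampler can return a plausible but wrong answer a meaningful fraction of the time, even when the correct answer is the single most likely option.

```python
import math
import random

def softmax(logits):
    """Convert raw scores ("logits") into a probability distribution."""
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical candidate answers to "Who wrote 'The Trial'?" with made-up
# logits; "Franz Kafka" is correct, the others are plausible-sounding errors.
candidates = ["Franz Kafka", "Albert Camus", "Hermann Hesse"]
logits = [2.0, 1.1, 0.4]

probs = softmax(logits)
for answer, p in zip(candidates, probs):
    print(f"{answer}: {p:.2f}")

# Sampling repeatedly shows the incorrect candidates are chosen a
# non-trivial share of the time, which mirrors how a probabilistic
# predictor can produce fluent but factually wrong output.
random.seed(0)
draws = random.choices(candidates, weights=probs, k=10_000)
wrong = sum(1 for d in draws if d != "Franz Kafka")
print(f"Incorrect completions: {wrong / len(draws):.1%}")
```

With these invented numbers, the wrong candidates together carry close to 40% of the probability mass, so fabricated answers appear regularly despite the correct one being most likely.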
Based on these findings, the report advises users to independently verify any critical information produced by ChatGPT-5. This recommendation is especially pertinent for professional, academic, or health-related inquiries where precision is essential. The persistent error rate, even after marked improvements, underscores the need for cautious use and external validation of the model’s outputs.
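As one deliberately simple way to act on that advice, the hypothetical Python sketch below treats a model’s answer as a claim to be checked against a trusted reference before it is accepted. The TRUSTED_FACTS table and the get_model_answer stub are placeholders invented for this example, not a real API; in practice the reference could be an authoritative database, a citation check, or a human reviewer.

```python
# Minimal verification sketch: accept a model's answer only if it
# matches a trusted reference; otherwise flag it for human review.

TRUSTED_FACTS = {
    "boiling point of water at sea level (celsius)": "100",
}

def get_model_answer(question: str) -> str:
    """Stand-in for a chatbot call; replace with a real client."""
    return "100"

def verify(question: str) -> bool:
    """Return True only when the model's answer matches the reference."""
    answer = get_model_answer(question).strip()
    expected = TRUSTED_FACTS.get(question)
    if expected is None:
        # No reference available: flag rather than silently trust.
        print(f"UNVERIFIED: {question!r} -> {answer!r}")
        return False
    ok = answer == expected
    print(f"{'OK' if ok else 'MISMATCH'}: {question!r} -> {answer!r}")
    return ok

verify("boiling point of water at sea level (celsius)")
```

The design choice worth noting is the default: when no reference exists, the answer is marked unverified instead of being accepted, which matches the report’s recommendation of external validation for anything critical.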