Dataconomy
  • News
    • Artificial Intelligence
    • Cybersecurity
    • DeFi & Blockchain
    • Finance
    • Gaming
    • Startups
    • Tech
  • Industry
  • Research
  • Resources
    • Articles
    • Guides
    • Case Studies
    • Glossary
    • Whitepapers
  • Newsletter
  • + More
    • Conversations
    • Events
    • About
      • About
      • Contact
      • Imprint
      • Legal & Privacy
      • Partner With Us
Subscribe
No Result
View All Result
  • AI
  • Tech
  • Cybersecurity
  • Finance
  • DeFi & Blockchain
  • Startups
  • Gaming
Dataconomy
  • News
    • Artificial Intelligence
    • Cybersecurity
    • DeFi & Blockchain
    • Finance
    • Gaming
    • Startups
    • Tech
  • Industry
  • Research
  • Resources
    • Articles
    • Guides
    • Case Studies
    • Glossary
    • Whitepapers
  • Newsletter
  • + More
    • Conversations
    • Events
    • About
      • About
      • Contact
      • Imprint
      • Legal & Privacy
      • Partner With Us
Subscribe
No Result
View All Result
Dataconomy
No Result
View All Result

Why we might lose our only window into how AI thinks

A new paper from top AI labs proposes monitoring an AI's "chain of thought" to detect harmful intentions before they act

byKerem Gülen
July 17, 2025
in Research
Home Research

A paper titled “Chain of Thought Monitorability: A New and Fragile Opportunity for AI Safety” proposes a method for improving AI safety by monitoring the internal reasoning of AI models. The research is a collaborative effort from dozens of experts across the UK AI Security Institute, Apollo Research, Google DeepMind, OpenAI, Anthropic, Meta, and several universities. It suggests that analyzing an AI’s “chain of thought,” or its process of thinking out loud in text, offers a unique but potentially fragile way to detect harmful intentions.

A window into the AI’s mind

Modern AI models can be prompted to produce a chain of thought (CoT), which is a textual output of their reasoning steps before they give a final answer. For models explicitly trained to perform extended reasoning, this CoT can act like a look into the model’s internal computational process. An automated system, or CoT monitor, could read this text to flag suspicious or harmful plans. The paper states that for very difficult tasks, AI models built on the Transformer architecture must use their chain of thought as a form of working memory, making their reasoning steps visible. In other cases, models may have a natural tendency to think out loud even when it is not strictly necessary.

This approach has already been used in safety research to find undesirable model behaviors. Researchers have observed models explicitly stating intentions in their CoT, such as “Let’s hack” or “Let’s sabotage,” which allows monitors to catch misbehavior more effectively than by only observing the final action. It can also help discover a model’s underlying goals and identify flaws in AI evaluation methods, such as when a model knows it is being tested.

Stay Ahead of the Curve!

Don't miss out on the latest insights, trends, and analysis in the world of data, technology, and startups. Subscribe to our newsletter and get exclusive content delivered straight to your inbox.


AI learns language like a kid learns to read


Why this window could close

The paper warns that the ability to monitor AI chains of thought is fragile and could be lost due to future development practices. The authors identify several ways monitorability could be degraded:

  • Drift from legible reasoning: As models are trained more intensely with reinforcement learning based on outcomes rather than human-generated data, their chains of thought could drift from legible English into a more optimized, unreadable format.
  • Direct supervision of CoT: If developers directly train the chain of thought to be shorter, use a certain style, or avoid specific content, it might no longer be a faithful reflection of the model’s actual reasoning process.
  • Indirect optimization pressure: Training a model’s final output to look good to a preference model could indirectly put pressure on the CoT to also appear benign, even if the underlying reasoning is not.
  • Novel architectures: Future AI architectures might perform complex reasoning internally in a continuous latent space without needing to verbalize their thoughts, which would eliminate the safety advantages of CoT monitoring.

The authors recommend that AI researchers develop standardized evaluations to measure CoT monitorability and study how different training pressures affect it. For frontier AI developers, the paper suggests they should track the monitorability of their models, publish the results, and use these scores when making decisions about training and deployment. For example, developers might choose an earlier model checkpoint if monitorability degrades during training or justify a small decrease in monitorability if it results from a process that dramatically improves the model’s alignment.


Featured image credit

Tags: AI

Related Posts

Radware tricks ChatGPT’s Deep Research into Gmail data leak

Radware tricks ChatGPT’s Deep Research into Gmail data leak

September 19, 2025
OpenAI research finds AI models can scheme and deliberately deceive users

OpenAI research finds AI models can scheme and deliberately deceive users

September 19, 2025
MIT studies AI romantic bonds in r/MyBoyfriendIsAI group

MIT studies AI romantic bonds in r/MyBoyfriendIsAI group

September 19, 2025
Anthropic economic index reveals uneven Claude.ai adoption

Anthropic economic index reveals uneven Claude.ai adoption

September 17, 2025
Google releases VaultGemma 1B with differential privacy

Google releases VaultGemma 1B with differential privacy

September 17, 2025
OpenAI researchers identify the mathematical causes of AI hallucinations

OpenAI researchers identify the mathematical causes of AI hallucinations

September 17, 2025

LATEST NEWS

Zoom announces AI Companion 3.0 at Zoomtopia

Google Cloud adds Lovable and Windsurf as AI coding customers

Radware tricks ChatGPT’s Deep Research into Gmail data leak

Elon Musk’s xAI chatbot Grok exposed hundreds of thousands of private user conversations

Roblox game Steal a Brainrot removes AI-generated character, sparking fan backlash and a debate over copyright

DeepSeek releases R1 model trained for $294,000 on 512 H800 GPUs

Dataconomy

COPYRIGHT © DATACONOMY MEDIA GMBH, ALL RIGHTS RESERVED.

  • About
  • Imprint
  • Contact
  • Legal & Privacy

Follow Us

  • News
    • Artificial Intelligence
    • Cybersecurity
    • DeFi & Blockchain
    • Finance
    • Gaming
    • Startups
    • Tech
  • Industry
  • Research
  • Resources
    • Articles
    • Guides
    • Case Studies
    • Glossary
    • Whitepapers
  • Newsletter
  • + More
    • Conversations
    • Events
    • About
      • About
      • Contact
      • Imprint
      • Legal & Privacy
      • Partner With Us
No Result
View All Result
Subscribe

This website uses cookies. By continuing to use this website you are giving consent to cookies being used. Visit our Privacy Policy.