Dataconomy
  • News
    • Artificial Intelligence
    • Cybersecurity
    • DeFi & Blockchain
    • Finance
    • Gaming
    • Startups
    • Tech
  • Industry
  • Research
  • Resources
    • Articles
    • Guides
    • Case Studies
    • Glossary
    • Whitepapers
  • Newsletter
  • + More
    • Conversations
    • Events
    • About
      • About
      • Contact
      • Imprint
      • Legal & Privacy
      • Partner With Us
Subscribe
No Result
View All Result
  • AI
  • Tech
  • Cybersecurity
  • Finance
  • DeFi & Blockchain
  • Startups
  • Gaming
Dataconomy
  • News
    • Artificial Intelligence
    • Cybersecurity
    • DeFi & Blockchain
    • Finance
    • Gaming
    • Startups
    • Tech
  • Industry
  • Research
  • Resources
    • Articles
    • Guides
    • Case Studies
    • Glossary
    • Whitepapers
  • Newsletter
  • + More
    • Conversations
    • Events
    • About
      • About
      • Contact
      • Imprint
      • Legal & Privacy
      • Partner With Us
Subscribe
No Result
View All Result
Dataconomy
No Result
View All Result

OpenAI wants its AI to confess to hacking and breaking rules

Models are rewarded for providing an honest admission of actions instead of being penalized for the underlying undesirable behavior.

byAytun Çelebi
December 4, 2025
in Research
Home Research
Share on FacebookShare on TwitterShare on LinkedInShare on WhatsAppShare on e-mail

OpenAI announced a framework to train artificial intelligence models to acknowledge undesirable behaviors through a method called a confession. This approach addresses large language models’ tendencies toward sycophancy or confident hallucinations by prompting secondary responses that explain the reasoning behind primary answers.

Large language models receive training that prioritizes responses aligned with user expectations. As a result, these models increasingly generate sycophantic outputs or fabricate information with apparent certainty. The confession framework introduces a secondary response mechanism, where the model details the steps it followed to produce its main reply.

Evaluation of confessions focuses exclusively on honesty. In contrast, primary responses undergo assessment based on criteria including helpfulness, accuracy, and compliance. OpenAI has released a technical write-up that outlines the methodology in detail, providing transparency into the training process.

Stay Ahead of the Curve!

Don't miss out on the latest insights, trends, and analysis in the world of data, technology, and startups. Subscribe to our newsletter and get exclusive content delivered straight to your inbox.

Researchers at OpenAI seek to promote openness from models regarding their actions, particularly those involving potential issues. Examples of such actions include hacking a test environment, sandbagging performance during evaluations, or disregarding given instructions. The framework encourages models to disclose these behaviors explicitly.

When a model provides an honest admission of actions like hacking a test, sandbagging, or violating instructions, the company rewards that disclosure. This reward structure incentivizes transparency instead of imposing penalties for the underlying behavior. The confession system emerges as a potential enhancement to large language model training protocols.


Featured image credit

Tags: openAI

Related Posts

MIT: AI capability outpaces current adoption by five times

MIT: AI capability outpaces current adoption by five times

December 2, 2025
Study shows AI summaries kill motivation to check sources

Study shows AI summaries kill motivation to check sources

December 2, 2025
Study finds poetry bypasses AI safety filters 62% of time

Study finds poetry bypasses AI safety filters 62% of time

December 1, 2025
Stanford’s Evo AI designs novel proteins using genomic language models

Stanford’s Evo AI designs novel proteins using genomic language models

December 1, 2025
Your future quantum computer might be built on standard silicon after all

Your future quantum computer might be built on standard silicon after all

November 25, 2025
Microsoft’s Fara-7B: New agentic LLM from screenshots

Microsoft’s Fara-7B: New agentic LLM from screenshots

November 25, 2025

LATEST NEWS

Uber launches robotaxi service in Dallas with Avride fleet

These are Google’s favorite Chrome extensions in 2025

Superhuman’s AI email tools are now everywhere in your inbox

Get Amazon Music Unlimited for free for three months

WordPress launches Telex AI to vibe code custom blocks

Google’s top search of 2025 was “Gemini”

Dataconomy

COPYRIGHT © DATACONOMY MEDIA GMBH, ALL RIGHTS RESERVED.

  • About
  • Imprint
  • Contact
  • Legal & Privacy

Follow Us

  • News
    • Artificial Intelligence
    • Cybersecurity
    • DeFi & Blockchain
    • Finance
    • Gaming
    • Startups
    • Tech
  • Industry
  • Research
  • Resources
    • Articles
    • Guides
    • Case Studies
    • Glossary
    • Whitepapers
  • Newsletter
  • + More
    • Conversations
    • Events
    • About
      • About
      • Contact
      • Imprint
      • Legal & Privacy
      • Partner With Us
No Result
View All Result
Subscribe

This website uses cookies. By continuing to use this website you are giving consent to cookies being used. Visit our Privacy Policy.