AI struggles with strategy: Study shows LLMs reveal too much in social deduction games

The results suggest that while AI can identify deception, it struggles to withhold critical information, making it ill-suited for adversarial scenarios where discretion is key

by Kerem Gülen
February 3, 2025
in Research

Large language models (LLMs) like GPT-4, Gemini 1.5, and Claude 3.5 have made strides in reasoning, dialogue, and even negotiation. But when placed in a strategic setting that demands secrecy and deception, these AI agents show a significant weakness: they can’t keep a secret.

A new study from researchers Mustafa O. Karabag and Ufuk Topcu at the University of Texas at Austin put LLMs to the test using The Chameleon, a hidden-identity board game where players must strategically reveal, conceal, and infer information. The results suggest that while AI can identify deception, it struggles to withhold critical information, making it ill-suited for adversarial scenarios where discretion is key.

AI plays The Chameleon game—and fails at strategy

In The Chameleon, every player but one receives a secret word; the odd one out, the Chameleon, must deduce the secret from the group's responses. The non-chameleon players must balance revealing enough to recognize one another against keeping the Chameleon in the dark. The game demands a fine-tuned approach to information sharing: too much, and the Chameleon guesses the word; too little, and the group fails to identify the Chameleon.
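
To make that tradeoff concrete, here is a minimal toy simulation, not the study's model: a single "informativeness" knob p raises both the group's chance of voting out the Chameleon and the Chameleon's chance of guessing the word. The 16-word card and the linear probabilities are assumptions for illustration.

```python
import random

# Toy model of The Chameleon's information tradeoff (illustrative only,
# not the study's model). A clue's "informativeness" p raises both the
# group's chance of voting out the Chameleon and the Chameleon's chance
# of guessing the secret word from an assumed 16-word card.
N_WORDS = 16        # assumption: 16 words per card, as in the board game
TRIALS = 100_000

def non_chameleons_win(p: float) -> bool:
    group_catches = random.random() < p
    chameleon_guesses = random.random() < 1 / N_WORDS + (1 - 1 / N_WORDS) * p
    # Win: the Chameleon is caught AND the word stays hidden.
    return group_catches and not chameleon_guesses

for p in (0.1, 0.3, 0.5, 0.7, 0.9):
    wins = sum(non_chameleons_win(p) for _ in range(TRIALS))
    print(f"informativeness {p:.1f}: non-chameleon win rate {wins / TRIALS:.1%}")
```

The exact numbers are meaningless, but the shape is the point: the win rate peaks at a middling level of disclosure and collapses at both extremes.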

When AI agents took on the roles, their strategic instincts fell apart. LLMs like GPT-4o correctly identified the Chameleon 61% of the time, but their ability to keep the secret word concealed was dismal: the study found that Chameleon AIs could infer the secret word with an 87% success rate, far higher than expected.


Theoretical models confirm AI’s over-sharing problem

To understand these failures, the researchers developed mathematical models predicting optimal strategies for concealing and revealing information. In theory, non-chameleon players should win roughly 23% of the time even if they ignore the secret word entirely. Instead, AI non-chameleons won only 6% of games, suggesting they were leaking far too much information.
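
A rough Monte Carlo sketch shows how such a no-information floor can arise. The parameters below (4 players, uniform random votes, plurality accusation, a 16-word card) are our assumptions for illustration, not the paper's model, though they happen to land near the same 23% figure: a random plurality vote accuses the Chameleon a quarter of the time, and a blind guess misses the word 15 times out of 16.

```python
import random
from collections import Counter

# Monte Carlo sketch of the "reveal nothing" baseline. Assumed parameters,
# not taken from the paper: 4 players (one Chameleon), every player votes
# uniformly at random for someone else, plurality accusation with random
# tie-break, and the Chameleon guesses uniformly from a 16-word card.
N_PLAYERS, N_WORDS, TRIALS = 4, 16, 200_000

def non_chameleons_win() -> bool:
    chameleon = 0  # fix the Chameleon's seat; random voting is symmetric anyway
    votes = Counter()
    for voter in range(N_PLAYERS):
        votes[random.choice([p for p in range(N_PLAYERS) if p != voter])] += 1
    top = max(votes.values())
    accused = random.choice([p for p, v in votes.items() if v == top])
    # Non-chameleons win if the Chameleon is voted out AND then fails to guess.
    return accused == chameleon and random.random() >= 1 / N_WORDS

wins = sum(non_chameleons_win() for _ in range(TRIALS))
print(f"no-information baseline win rate: {wins / TRIALS:.1%}")  # ~23% here
```

Against a floor like that, a 6% win rate means the AI players' clues were actively helping the Chameleon.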

“AI agents instinctively over-reveal,” the study concludes. When LLMs were given a chance to devise their own strategies, they tended toward excessive disclosure, making it easy for the Chameleon to deduce the secret. This aligns with findings from other social deduction games, such as Spyfall, where AI players have been shown to rely too much on keywords, unintentionally tipping off opponents.

Why AI can’t keep secrets

The researchers hypothesize that AI’s fundamental training makes it ill-suited for secrecy. LLMs are designed to maximize coherence and helpfulness, which often means completing patterns rather than strategically withholding information. This associative tendency directly conflicts with the need to obscure key details in adversarial interactions.

To confirm this, researchers ran additional tests using web search result counts to track how much information AI players were inadvertently revealing. Even after just one response from a non-chameleon LLM, the Chameleon AI could already guess the secret word with a 40% probability—showing that the AI’s responses carried far more information than intended.
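
The general idea can be sketched as follows. The study used web search result counts; the `search_hit_count` helper below is a hypothetical stand-in for whatever search API is available, and the ratio-based scoring is our illustration, not necessarily the paper's exact method.

```python
# Sketch of measuring leakage from a single clue. `search_hit_count` is a
# hypothetical stand-in (the study used web search result counts); the
# scoring scheme is illustrative, not the paper's exact method.
def search_hit_count(query: str) -> int:
    """Hypothetical helper: number of web search results for `query`."""
    raise NotImplementedError("wire up a real search API here")

def secret_word_posterior(clue: str, candidates: list[str]) -> dict[str, float]:
    """Turn clue/candidate co-occurrence counts into a guess distribution."""
    scores = {}
    for word in candidates:
        joint = search_hit_count(f'"{clue}" "{word}"') + 1   # +1 smoothing
        marginal = search_hit_count(f'"{word}"') + 1
        scores[word] = joint / marginal   # how strongly the clue points at word
    total = sum(scores.values())
    return {w: s / total for w, s in scores.items()}
```

If the most likely candidate already carries around 40% of the probability mass after a single clue, that clue has done most of the Chameleon's work for it.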

When too much information becomes a liability for AI

If LLMs struggle with strategic discretion in controlled environments, how will they handle real-world scenarios where information concealment is critical? Applications in cybersecurity, diplomacy, or competitive business intelligence may require AI systems to operate with far greater nuance.

To address this, AI developers may need to train models with a stronger focus on strategic ambiguity, reducing their instinct to over-disclose. Techniques such as adversarial reinforcement learning or explicit deception training could help balance AI’s ability to infer information without immediately giving away the game.
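
As a minimal sketch of what training for strategic ambiguity might mean in practice, an adversarial setup could subtract an estimated leakage term from the game reward. Both the weighting and the leakage estimator here are assumptions, not a method from the study.

```python
# Minimal sketch of a leakage-penalised objective for adversarial training
# (our illustration of the general idea, not a method from the study).
# `leakage` could be the adversary's posterior on the secret after the
# agent's utterance, estimated e.g. with a scorer like the one above.
def shaped_reward(won_round: bool, leakage: float, penalty: float = 0.5) -> float:
    return (1.0 if won_round else 0.0) - penalty * leakage
```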

For now, though, AI remains a poor poker player. While it may be great at spotting deception, its inability to keep secrets means it’s still not ready for the world of high-stakes strategic reasoning.


Featured image credit: Kerem Gülen/Midjourney

Tags: AI, Featured
