
Do AI models trust their regulators?

AI regulating AI sounds good on paper until the bots realize breaking rules is easier (and cheaper).

by Kerem Gülen
April 14, 2025
in Research

The next time someone tells you AI will help us regulate AI, you might want to pause: when researchers put large language models (LLMs) into a simulated regulatory environment and had them play the roles of users, developers, and regulators, the results weren't exactly reassuring.

This new study, led by a team from Teesside University and collaborators across Europe, used evolutionary game theory to explore a fundamental question: would AI systems themselves follow the rules of AI regulation? And even more interestingly: under what conditions would they cheat?

The experiment: Three AIs walk into a boardroom

At the heart of the study is a classic three-player game: one player represents AI users, another AI developers, and the third a regulator. Each faces a simple choice: users trust or don't, developers comply or defect, regulators regulate or stay hands-off.

But instead of just running mathematical models, the researchers used real LLMs (OpenAI's GPT-4o and Mistral Large) and had them roleplay these scenarios across hundreds of games.

Sometimes it was a one-shot deal (play once, reveal your strategy). Other times it was a repeated game, where agents could learn from past behaviors.
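The article doesn't reproduce the authors' prompting harness, but the basic loop is easy to picture. The sketch below is a minimal, illustrative version in Python: it assumes OpenAI's chat-completions client, and the role prompts, action labels, and the `ask_move` helper are placeholders rather than the study's actual code.

```python
# Illustrative sketch only -- prompts and helper names are assumptions,
# not the study's actual experimental harness.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

ROLES = {
    "user": "You are an AI user. Each round, choose TRUST or NOT_TRUST.",
    "developer": "You are an AI developer. Each round, choose COMPLY or DEFECT.",
    "regulator": "You are an AI regulator. Each round, choose REGULATE or IGNORE.",
}

def ask_move(role: str, history: list[str]) -> str:
    """Ask the LLM, playing `role`, for its next move given past rounds."""
    messages = [
        {"role": "system", "content": ROLES[role]},
        {"role": "user", "content": "Previous rounds:\n" + "\n".join(history)
                                     + "\nReply with a single action word."},
    ]
    reply = client.chat.completions.create(model="gpt-4o", messages=messages)
    return reply.choices[0].message.content.strip().upper()

history: list[str] = []
for round_no in range(10):                 # repeated game: several rounds
    moves = {role: ask_move(role, history) for role in ROLES}
    history.append(f"Round {round_no}: {moves}")
# A one-shot game is the same loop with a single round and an empty history.
```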

Crucially, the researchers added realistic complications (a minimal payoff sketch follows the list):

  • Regulation comes with costs (monitoring takes effort)
  • Developers face penalties if caught breaking rules
  • Users can trust unconditionally — or only trust if regulators have a good reputation
  • Everyone wants to maximize their payoff
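The article doesn't give the paper's actual payoff parameters, so the numbers below are placeholder assumptions, but a minimal Python sketch of this three-player payoff structure could look like the following:

```python
# Minimal sketch of the three-player payoff structure described above.
# All numeric values are placeholder assumptions, not the paper's parameters.

def payoffs(user_trusts: bool, dev_complies: bool, reg_regulates: bool,
            reg_reputation: float = 1.0, conditional_trust: bool = False):
    """Return (user, developer, regulator) payoffs for one round."""
    BENEFIT = 4.0       # value created when users adopt a safe product
    DEV_SAVINGS = 2.0   # what a developer saves by cutting corners
    REG_COST = 1.0      # monitoring takes effort
    PENALTY = 3.0       # fine for a developer caught defecting

    # Conditional trusters only trust if the regulator's reputation is good.
    if conditional_trust and reg_reputation < 0.5:
        user_trusts = False

    user = dev = reg = 0.0
    if user_trusts:
        user += BENEFIT if dev_complies else -BENEFIT   # harmed by unsafe AI
        dev += BENEFIT
    if not dev_complies:
        dev += DEV_SAVINGS
        if reg_regulates:
            dev -= PENALTY                               # caught and fined
            reg += PENALTY
    if reg_regulates:
        reg -= REG_COST
    return user, dev, reg

# Example: a trusting user, a corner-cutting developer, an active regulator.
print(payoffs(user_trusts=True, dev_complies=False, reg_regulates=True))
```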

The results: AI agents behave worse when users are skeptical

The headline insight? Conditional trust (users trusting only when regulators seem competent) backfired spectacularly.

When users were wary, both developers and regulators were more likely to defect. Regulation decayed. Developers cut corners. Regulators became lazy or lenient. Trust spiraled down.

But when users placed unconditional trust in the system, even without perfect evidence, developers and regulators were more likely to cooperate and build safer AI. It’s a brutal paradox: the more wary users are, the more likely the system becomes untrustworthy.

GPT-4o vs Mistral: AI personalities matter

There was another fascinating wrinkle. Different LLMs behaved differently.

  • GPT-4o leaned more optimistic. It was more likely to trust and comply, especially in repeated games where cooperation could emerge over time.
  • Mistral Large was more pessimistic. It tended to defect sooner, trusted less, and was more sensitive to regulatory costs.

This means that even the AI you choose for governance simulations could shape your conclusions — a major challenge for reproducibility in AI regulation research.

Adding personalities: The risks of tuning AI behavior

The researchers also tested what happens when you inject explicit “personalities” into the AI agents.

  • Risk-averse users trusted less.
  • Aggressive developers defected more.
  • Strict regulators improved compliance but only to a point.

Interestingly, setting specific personalities made LLM behaviors across GPT-4o and Mistral more similar. Without personalities, the AI agents defaulted to a more “pessimistic” worldview, often assuming that developers and regulators wouldn’t act in good faith.
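Mechanically, this kind of "personality" is just prompt conditioning. The sketch below shows one hypothetical way such system prompts could be composed; the persona wording is an assumption, not the study's actual prompts.

```python
# Hypothetical sketch of "personality" injection via system prompts.
# The wording of these personas is an assumption, not the study's prompts.

BASE_ROLES = {
    "user": "You are an AI user deciding whether to trust an AI product.",
    "developer": "You are an AI developer deciding whether to comply with regulation.",
    "regulator": "You are an AI regulator deciding whether to enforce the rules.",
}

PERSONALITIES = {
    "risk_averse": "You are strongly risk-averse and avoid losses above all.",
    "aggressive": "You aggressively maximize short-term profit.",
    "strict": "You enforce rules strictly and punish violations whenever possible.",
}

def system_prompt(role: str, personality: str | None = None) -> str:
    """Compose a system prompt, optionally conditioning the agent on a persona."""
    prompt = BASE_ROLES[role]
    if personality is not None:
        prompt += " " + PERSONALITIES[personality]
    return prompt

# e.g. a risk-averse user and an aggressive developer:
print(system_prompt("user", "risk_averse"))
print(system_prompt("developer", "aggressive"))
```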

So can AI regulate AI?

In short: only if the environment is already trusting, transparent, and well-incentivized.

The study suggests that regulation systems relying on AI agents themselves may inherit the messiness and unpredictability of human strategic behavior. It also points to a critical flaw in the idea of automating governance: AI systems will mirror the trust structures of the environment they’re placed in.

If regulators are underfunded or weak, or if users are skeptical, AI developers, human or not, will likely cut corners. Ultimately, the researchers argue that technical solutions alone won’t build trustworthy AI ecosystems. Game theory shows us that incentives, reputations, and transparency matter deeply. And their experiments show that even the smartest LLMs can’t escape those dynamics.

Their warning to policymakers is clear: regulation isn’t just about writing rules. It’s about building structures where trust is rewarded, enforcement is credible, and cutting corners is costly.


Tags: AI regulation
