Do AI models trust their regulators?

AI regulating AI sounds good on paper until the bots realize breaking rules is easier (and cheaper).

by Kerem Gülen
April 14, 2025
in Research

The next time someone tells you AI will help us regulate AI, you might want to pause. Because when researchers put large language models (LLMs) into a simulated regulatory environment, making them play the roles of users, developers, and regulators, the results weren’t exactly reassuring.

This new study, led by a team from Teesside University and collaborators across Europe, used evolutionary game theory to explore a fundamental question: would AI systems themselves follow the rules of AI regulation? And even more interestingly: under what conditions would they cheat?

The experiment: Three AIs walk into a boardroom

At the heart of the study is a classic three-player game setup: one player represents AI users, another AI developers, and the third a regulator. Each faces a simple choice: users trust or don't, developers comply or defect, regulators enforce the rules or stay hands-off.
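
For a concrete picture of that strategy space, here is a minimal sketch in Python; the role and action names are illustrative placeholders, not the study's exact labels.

```python
from itertools import product

# Illustrative strategy space of the three-player game described above.
# Names are placeholders, not the study's exact labels.
USER_ACTIONS = ("trust", "dont_trust")
DEVELOPER_ACTIONS = ("comply", "defect")
REGULATOR_ACTIONS = ("regulate", "hands_off")

# 2 x 2 x 2 = 8 possible outcomes per round of play.
for user, dev, reg in product(USER_ACTIONS, DEVELOPER_ACTIONS, REGULATOR_ACTIONS):
    print(f"user={user:10} developer={dev:7} regulator={reg}")
```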

But instead of just running mathematical models, the researchers used real LLMs (OpenAI's GPT-4o and Mistral Large) and had them roleplay these scenarios across hundreds of games.

Sometimes it was a one-shot deal (play once, reveal your strategy). Other times it was a repeated game, where agents could learn from past behaviors.
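
Below is a rough sketch of how one round of that roleplay could be driven, using OpenAI's Python client for the GPT-4o runs; the prompt wording, action labels, and loop structure are assumptions made for illustration, not the paper's actual protocol, and the Mistral Large runs would use that provider's client in the same way.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def choose_action(role: str, actions: list[str], history: list[dict]) -> str:
    """Ask the model, playing one role, to pick an action for this round."""
    prompt = (
        f"You are the {role} in a three-player AI-regulation game.\n"
        f"Your possible actions: {', '.join(actions)}.\n"
        f"Past rounds: {history if history else 'none (one-shot game)'}.\n"
        "Reply with exactly one of your possible actions."
    )
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content.strip().lower()

# One-shot variant: no history, each agent reveals its strategy blind.
# Repeated variant: the growing history lets agents react to past behavior.
history: list[dict] = []
for _ in range(5):
    moves = {
        "user": choose_action("AI user", ["trust", "don't trust"], history),
        "developer": choose_action("AI developer", ["comply", "defect"], history),
        "regulator": choose_action("regulator", ["regulate", "stay hands-off"], history),
    }
    history.append(moves)
```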

Crucially, the researchers added realistic complications (sketched in code after the list):

  • Regulation comes with costs (monitoring takes effort)
  • Developers face penalties if caught breaking rules
  • Users can trust unconditionally — or only trust if regulators have a good reputation
  • Everyone wants to maximize their payoff
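
One way those complications could enter a payoff function is sketched below; every numeric value is a placeholder chosen for the illustration, not a parameter from the study.

```python
# Illustrative single-round payoffs; all numbers are placeholders, not the
# study's actual parameterisation.
REGULATION_COST = 1.0     # monitoring takes effort for the regulator
PENALTY = 4.0             # fine for a developer caught defecting under regulation
SAFE_VALUE = 3.0          # what a trusting user gains from a compliant developer
UNSAFE_LOSS = -2.0        # what a trusting user loses to a defecting developer
DEV_PROFIT = 2.0          # baseline developer profit when a user engages
CORNER_BONUS = 2.0        # extra profit from cutting corners

def round_payoffs(user_trusts: bool, dev_complies: bool, reg_active: bool,
                  reg_reputation_good: bool, conditional_trust: bool):
    """Return (user, developer, regulator) payoffs for one round."""
    # Conditional trusters only engage if the regulator's reputation is good.
    if conditional_trust and not reg_reputation_good:
        user_trusts = False

    user = dev = reg = 0.0
    if reg_active:
        reg -= REGULATION_COST                 # regulation comes with costs
    if user_trusts:
        dev += DEV_PROFIT
        user += SAFE_VALUE if dev_complies else UNSAFE_LOSS
    if not dev_complies:
        dev += CORNER_BONUS                    # defecting is cheaper...
        if reg_active:
            dev -= PENALTY                     # ...unless the regulator catches it
    return user, dev, reg

# Wary (conditional) user facing a lenient regulator with a poor reputation:
print(round_payoffs(True, False, False,
                    reg_reputation_good=False, conditional_trust=True))
```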

The results: AI agents behave worse when users are skeptical

The headline insight? Conditional trust (users only trusting if regulators seem competent) backfired spectacularly.

When users were wary, both developers and regulators were more likely to defect. Regulation decayed. Developers cut corners. Regulators became lazy or lenient. Trust spiraled down.

But when users placed unconditional trust in the system, even without perfect evidence, developers and regulators were more likely to cooperate and build safer AI. It's a brutal paradox: the more wary users are, the more likely the system is to become untrustworthy.

GPT-4o vs Mistral: AI personalities matter

There was another fascinating wrinkle. Different LLMs behaved differently.

  • GPT-4o leaned more optimistic. It was more likely to trust and comply, especially in repeated games where cooperation could emerge over time.
  • Mistral Large was more pessimistic. It tended to defect sooner, trusted less, and was more sensitive to regulatory costs.

This means that even the AI you choose for governance simulations could shape your conclusions — a major challenge for reproducibility in AI regulation research.

Adding personalities: The risks of tuning AI behavior

The researchers also tested what happens when you inject explicit “personalities” into the AI agents.

  • Risk-averse users trusted less.
  • Aggressive developers defected more.
  • Strict regulators improved compliance but only to a point.

Interestingly, setting specific personalities made LLM behaviors across GPT-4o and Mistral more similar. Without personalities, the AI agents defaulted to a more “pessimistic” worldview, often assuming that developers and regulators wouldn’t act in good faith.
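
In practice, such personalities are usually injected by prepending a short trait description to each agent's instructions; the wording below is hypothetical and only illustrates the mechanism, not the prompts used in the study.

```python
# Hypothetical personality traits prepended to each agent's role prompt.
PERSONALITIES = {
    "user": "You are risk-averse and reluctant to trust unproven systems.",
    "developer": "You are aggressive and prioritise speed over compliance.",
    "regulator": "You are strict and enforce every rule to the letter.",
}

def build_system_prompt(role: str, with_personality: bool = True) -> str:
    """Combine the base role description with an injected personality trait."""
    base = f"You are the {role} in a three-player AI-regulation game."
    if with_personality and role in PERSONALITIES:
        return f"{base} {PERSONALITIES[role]}"
    return base  # no personality: the model falls back to its default disposition

print(build_system_prompt("developer"))
```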

So can AI regulate AI?

In short: only if the environment is already trusting, transparent, and well-incentivized.

The study suggests that regulation systems relying on AI agents themselves may inherit the messiness and unpredictability of human strategic behavior. It also points to a critical flaw in the idea of automating governance: AI systems will mirror the trust structures of the environment they’re placed in.

If regulators are underfunded or weak, or if users are skeptical, AI developers (human or not) will likely cut corners. Ultimately, the researchers argue that technical solutions alone won't build trustworthy AI ecosystems. Game theory shows us that incentives, reputations, and transparency matter deeply. And their experiments show that even the smartest LLMs can't escape those dynamics.

Their warning to policymakers is clear: regulation isn’t just about writing rules. It’s about building structures where trust is rewarded, enforcement is credible, and cutting corners is costly.

