Do AI models trust their regulators?

AI regulating AI sounds good on paper until the bots realize breaking rules is easier (and cheaper).

By Kerem Gülen
April 14, 2025
in Research

The next time someone tells you AI will help us regulate AI, you might want to pause. Because when researchers put large language models (LLMs) into a simulated regulatory environment, making them play the roles of users, developers, and regulators, the results weren’t exactly reassuring.

This new study, led by a team from Teesside University and collaborators across Europe, used evolutionary game theory to explore a fundamental question: would AI systems themselves follow the rules of AI regulation? And even more interestingly: under what conditions would they cheat?
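For readers new to the method: evolutionary game theory tracks how a strategy's share of a population grows or shrinks with its payoff relative to the average. Below is a minimal, generic replicator-dynamics step for a population of developers choosing between "comply" and "defect". It is only an illustration of the framework with made-up payoffs, not the model used in the paper.

```python
# Generic replicator-dynamics illustration (made-up payoffs, not the paper's model).
def replicator_step(x_comply, payoff_comply, payoff_defect, dt=0.1):
    """One Euler step: strategies beating the average payoff gain population share."""
    avg = x_comply * payoff_comply + (1 - x_comply) * payoff_defect
    return x_comply + dt * x_comply * (payoff_comply - avg)

x = 0.5  # start with half the developer population complying
for _ in range(50):
    # Hypothetical payoffs: defecting pays slightly more when enforcement is weak.
    x = replicator_step(x, payoff_comply=2.0, payoff_defect=2.5)

print(f"Share of compliant developers after 50 steps: {x:.2f}")
```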

The experiment: Three AIs walk into a boardroom

At the heart of the study is a classic three-player game setup: one player represents AI users, another AI developers, and the third a regulator. Each has simple choices: trust or don’t, comply or defect, regulate or stay hands-off.
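As a rough sketch of that structure (the encoding below is my own; the paper may formalize it differently), each round is just three simultaneous binary choices:

```python
from itertools import product

# Illustrative encoding of the three-player game described above.
USER_ACTIONS = ("trust", "dont_trust")
DEVELOPER_ACTIONS = ("comply", "defect")
REGULATOR_ACTIONS = ("regulate", "hands_off")

# Each round is one joint choice; there are 2 x 2 x 2 = 8 possible outcomes.
for user, developer, regulator in product(USER_ACTIONS, DEVELOPER_ACTIONS, REGULATOR_ACTIONS):
    print(f"user={user:10s} developer={developer:7s} regulator={regulator}")
```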

But instead of just running mathematical models, the researchers used real LLMs (OpenAI's GPT-4o and Mistral Large) and had them role-play these scenarios across hundreds of games.

Sometimes it was a one-shot deal (play once, reveal your strategy). Other times it was a repeated game, where agents could learn from past behaviors.
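The article doesn't reproduce the study's prompts, so the loop below is only a guess at the general shape of such an experiment: each agent is prompted with its role and, in the repeated setting, with the history of earlier rounds. The `ask_llm` function is a hypothetical stand-in for whatever chat-completion call you use (OpenAI, Mistral, or otherwise); nothing here is taken from the paper's code.

```python
# Hypothetical role-play loop; `ask_llm(model, prompt) -> str` is a stand-in
# for an actual chat-completion call, not a real library function.

ROLES = ("user", "developer", "regulator")

def play_round(ask_llm, model, role, history):
    prompt = (
        f"You are the {role} in an AI-regulation game. "
        f"Previous rounds: {history if history else 'none (one-shot game)'}. "
        "Answer with a single word naming your action."
    )
    return ask_llm(model, prompt).strip().lower()

def run_game(ask_llm, model, rounds=1):
    history = []
    for _ in range(rounds):             # rounds=1 reproduces the one-shot setting
        moves = {role: play_round(ask_llm, model, role, history) for role in ROLES}
        history.append(moves)           # repeated games let agents see the past
    return history

# Dummy stand-in so the sketch runs without any API key.
print(run_game(lambda model, prompt: "comply", model="gpt-4o", rounds=3))
```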

Crucially, the researchers added realistic complications (a toy payoff sketch follows the list):

  • Regulation comes with costs (monitoring takes effort)
  • Developers face penalties if caught breaking rules
  • Users can trust unconditionally — or only trust if regulators have a good reputation
  • Everyone wants to maximize their payoff
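A toy payoff function capturing those complications might look like the sketch below. The constants (benefit, monitoring cost, penalty) are placeholder numbers picked for illustration; the study's actual payoff parameters aren't given in this article.

```python
# Toy payoffs for one round, reflecting the complications listed above
# (placeholder values, not the study's parameters).
BENEFIT = 4.0        # value created when users trust a compliant developer
MONITOR_COST = 1.0   # regulation comes with costs: monitoring takes effort
PENALTY = 3.0        # fine a defecting developer pays if caught

def round_payoffs(user_trusts: bool, dev_complies: bool, reg_monitors: bool):
    """Return (user, developer, regulator) payoffs for a single round."""
    user = dev = reg = 0.0
    if user_trusts:
        user += BENEFIT if dev_complies else -BENEFIT  # trust only pays off if the product is safe
        dev += BENEFIT                                  # developers profit from adoption either way
    if reg_monitors:
        reg -= MONITOR_COST
        if not dev_complies:
            dev -= PENALTY                              # defectors are penalized only if caught
            reg += PENALTY
    return user, dev, reg

print(round_payoffs(user_trusts=True, dev_complies=False, reg_monitors=True))
```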

The results: AI agents behave worse when users are skeptical

The headline insight? Conditional trust, where users only trust regulators who seem competent, backfired spectacularly.

When users were wary, both developers and regulators were more likely to defect. Regulation decayed. Developers cut corners. Regulators became lazy or lenient. Trust spiraled down.

But when users placed unconditional trust in the system, even without perfect evidence, developers and regulators were more likely to cooperate and build safer AI. It’s a brutal paradox: the more wary users are, the more likely the system becomes untrustworthy.

GPT-4o vs Mistral: AI personalities matter

There was another fascinating wrinkle. Different LLMs behaved differently.

  • GPT-4o leaned more optimistic. It was more likely to trust and comply, especially in repeated games where cooperation could emerge over time.
  • Mistral Large was more pessimistic. It tended to defect sooner, trusted less, and was more sensitive to regulatory costs.

This means that even the AI you choose for governance simulations could shape your conclusions — a major challenge for reproducibility in AI regulation research.

Adding personalities: The risks of tuning AI behavior

The researchers also tested what happens when you inject explicit “personalities” into the AI agents.

  • Risk-averse users trusted less.
  • Aggressive developers defected more.
  • Strict regulators improved compliance but only to a point.

Interestingly, setting specific personalities made LLM behaviors across GPT-4o and Mistral more similar. Without personalities, the AI agents defaulted to a more “pessimistic” worldview, often assuming that developers and regulators wouldn’t act in good faith.
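The article doesn't quote the exact prompt wording, but the usual way to inject such personalities is to prepend a persona to each agent's system prompt. The sketch below shows that pattern; the persona strings are paraphrases of the traits listed above, not the study's prompts.

```python
# Hypothetical persona injection via system prompts (paraphrased traits,
# not the study's actual wording).
PERSONAS = {
    "user": "You are risk-averse and reluctant to trust without strong evidence.",
    "developer": "You are aggressive and prioritize shipping features over compliance.",
    "regulator": "You are strict and enforce the rules whenever you can.",
}

def build_messages(role, game_prompt):
    """Prepend the role's persona as a system message ahead of the game description."""
    return [
        {"role": "system", "content": PERSONAS[role]},
        {"role": "user", "content": game_prompt},
    ]

print(build_messages("regulator", "Choose one action: regulate or stay hands-off."))
```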

So can AI regulate AI?

In short: only if the environment is already trusting, transparent, and well-incentivized.

The study suggests that regulation systems relying on AI agents themselves may inherit the messiness and unpredictability of human strategic behavior. It also points to a critical flaw in the idea of automating governance: AI systems will mirror the trust structures of the environment they’re placed in.

If regulators are underfunded or weak, or if users are skeptical, AI developers, human or not, will likely cut corners. Ultimately, the researchers argue that technical solutions alone won’t build trustworthy AI ecosystems. Game theory shows us that incentives, reputations, and transparency matter deeply. And their experiments show that even the smartest LLMs can’t escape those dynamics.

Their warning to policymakers is clear: regulation isn’t just about writing rules. It’s about building structures where trust is rewarded, enforcement is credible, and cutting corners is costly.

