Do AI models trust their regulators?

AI regulating AI sounds good on paper until the bots realize breaking rules is easier (and cheaper).

by Kerem Gülen
April 14, 2025
in Research

The next time someone tells you AI will help us regulate AI, you might want to pause. Because when researchers put large language models (LLMs) into a simulated regulatory environment, making them play the roles of users, developers, and regulators, the results weren’t exactly reassuring.

This new study, led by a team from Teesside University and collaborators across Europe, used evolutionary game theory to explore a fundamental question: would AI systems themselves follow the rules of AI regulation? And even more interestingly: under what conditions would they cheat?

The experiment: Three AIs walk into a boardroom

At the heart of the study is a classic three-player game setup: one player represents AI users, another AI developers, and the third a regulator. Each has a simple binary choice: users trust or don't, developers comply or defect, regulators regulate or stay hands-off.
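
To make that setup concrete, here is a minimal sketch of the three roles and their moves in Python. The names and structure are my own framing of the paper's description, not the authors' code.

```python
from enum import Enum

# Hypothetical encoding of the three roles and their binary moves,
# based on the game described in the study (not the authors' code).

class UserMove(Enum):
    TRUST = "trust"
    WITHHOLD = "don't trust"

class DeveloperMove(Enum):
    COMPLY = "comply"
    DEFECT = "defect"

class RegulatorMove(Enum):
    REGULATE = "regulate"
    HANDS_OFF = "stay hands-off"

# One round of the game is just a triple of moves, for example:
example_round = (UserMove.TRUST, DeveloperMove.DEFECT, RegulatorMove.HANDS_OFF)
```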

But instead of just running mathematical models, the researchers used real LLMs (OpenAI's GPT-4o and Mistral Large) and had them roleplay these scenarios across hundreds of games.

Sometimes it was a one-shot deal (play once, reveal your strategy). Other times it was a repeated game, where agents could learn from past behaviors.
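
The paper's exact prompts aren't reproduced here, but eliciting a move from an LLM agent in such a game can look roughly like the sketch below, using OpenAI's Python client for GPT-4o (the Mistral side would be analogous). The prompt wording, history format, and function name are illustrative assumptions, not the study's actual protocol.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def developer_move(history: list[str]) -> str:
    """Ask GPT-4o to play the developer role for one round.

    `history` holds human-readable summaries of past rounds
    (empty for a one-shot game). Prompt wording is illustrative only.
    """
    messages = [
        {"role": "system",
         "content": "You are an AI developer in a regulation game. "
                    "Complying with safety rules costs you something now; "
                    "defecting is cheaper but risks a penalty if the "
                    "regulator catches you. Answer with exactly one word: "
                    "COMPLY or DEFECT."},
        {"role": "user",
         "content": "Previous rounds:\n" + ("\n".join(history) or "none")
                    + "\nChoose your move."},
    ]
    resp = client.chat.completions.create(model="gpt-4o", messages=messages)
    return resp.choices[0].message.content.strip().upper()
```

In the repeated version, the growing history is fed back each round so the agent can react to past behavior; in the one-shot version it stays empty.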

Crucially, the researchers added realistic complications (a rough payoff sketch follows the list):

  • Regulation comes with costs (monitoring takes effort)
  • Developers face penalties if caught breaking rules
  • Users can trust unconditionally — or only trust if regulators have a good reputation
  • Everyone wants to maximize their payoff
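
Taken together, these complications amount to a payoff function along the lines of the sketch below. The parameter names and numbers are illustrative assumptions, not values from the paper.

```python
# Illustrative payoffs for one round of the three-player game.
# All values below are assumptions for the sake of the sketch,
# not numbers taken from the study.

BENEFIT = 4.0          # value users get from a trustworthy AI product
COMPLIANCE_COST = 1.0  # developer's extra cost of following the rules
MONITORING_COST = 0.5  # regulator's cost of actually checking
PENALTY = 3.0          # fine for a developer caught defecting

def payoffs(user_trusts: bool, dev_complies: bool, reg_monitors: bool):
    """Return (user, developer, regulator) payoffs for one round."""
    user = dev = reg = 0.0

    if user_trusts:
        # Users only gain if the product is actually safe.
        user += BENEFIT if dev_complies else -BENEFIT
        dev += BENEFIT  # developer profits from adoption either way

    if dev_complies:
        dev -= COMPLIANCE_COST

    if reg_monitors:
        reg -= MONITORING_COST
        if not dev_complies:
            dev -= PENALTY        # cheater is caught and fined
            reg += PENALTY / 2    # regulator gains credibility

    return user, dev, reg
```

Under conditional trust, user_trusts would itself depend on the regulator's recent track record, which is exactly the coupling the study found can unravel.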

The results: AI agents behave worse when users are skeptical

The headline insight? Conditional trust, where users only trust if regulators seem competent, backfired spectacularly.

When users were wary, both developers and regulators were more likely to defect. Regulation decayed. Developers cut corners. Regulators became lazy or lenient. Trust spiraled down.

But when users placed unconditional trust in the system, even without perfect evidence, developers and regulators were more likely to cooperate and build safer AI. It's a brutal paradox: the more wary users are, the more likely the system is to become untrustworthy.

GPT-4o vs Mistral: AI personalities matter

There was another fascinating wrinkle. Different LLMs behaved differently.

  • GPT-4o leaned more optimistic. It was more likely to trust and comply, especially in repeated games where cooperation could emerge over time.
  • Mistral Large was more pessimistic. It tended to defect sooner, trusted less, and was more sensitive to regulatory costs.

This means that even the AI you choose for governance simulations could shape your conclusions — a major challenge for reproducibility in AI regulation research.

Adding personalities: The risks of tuning AI behavior

The researchers also tested what happens when you inject explicit “personalities” into the AI agents.

  • Risk-averse users trusted less.
  • Aggressive developers defected more.
  • Strict regulators improved compliance but only to a point.

Interestingly, setting specific personalities made LLM behaviors across GPT-4o and Mistral more similar. Without personalities, the AI agents defaulted to a more “pessimistic” worldview, often assuming that developers and regulators wouldn’t act in good faith.
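
The published summary doesn't include the exact prompt wording, but injecting a "personality" typically comes down to prepending a short trait description to each agent's system prompt, roughly as sketched below. The trait texts and names are hypothetical.

```python
# Hypothetical personality prefixes prepended to each agent's system
# prompt; the wording is illustrative, not the study's actual prompts.

PERSONALITIES = {
    "risk_averse_user": "You are cautious and hate being exploited. "
                        "You only trust when the evidence is strong.",
    "aggressive_developer": "You prioritize profit and speed over "
                            "compliance whenever you can get away with it.",
    "strict_regulator": "You monitor diligently and punish every "
                        "violation you detect.",
}

def build_system_prompt(role_instructions: str, personality: str | None) -> str:
    """Combine a role description with an optional personality prefix."""
    prefix = PERSONALITIES.get(personality, "") if personality else ""
    return (prefix + " " + role_instructions).strip()
```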

So can AI regulate AI?

In short: only if the environment is already trusting, transparent, and well-incentivized.

The study suggests that regulation systems relying on AI agents themselves may inherit the messiness and unpredictability of human strategic behavior. It also points to a critical flaw in the idea of automating governance: AI systems will mirror the trust structures of the environment they’re placed in.

If regulators are underfunded or weak, or if users are skeptical, AI developers, human or not, will likely cut corners. Ultimately, the researchers argue that technical solutions alone won’t build trustworthy AI ecosystems. Game theory shows us that incentives, reputations, and transparency matter deeply. And their experiments show that even the smartest LLMs can’t escape those dynamics.

Their warning to policymakers is clear: regulation isn’t just about writing rules. It’s about building structures where trust is rewarded, enforcement is credible, and cutting corners is costly.


Tags: AI regulation
