The next time someone tells you AI will help us regulate AI, you might want to pause. Because when researchers put large language models (LLMs) into a simulated regulatory environment, making them play the roles of users, developers, and regulators, the results weren’t exactly reassuring.
This new study, led by a team from Teesside University and collaborators across Europe, used evolutionary game theory to explore a fundamental question: would AI systems themselves follow the rules of AI regulation? And even more interestingly: under what conditions would they cheat?
The experiment: Three AIs walk into a boardroom
At the heart of the study is a classic three-player game: one player represents AI users, another AI developers, and the third a regulator. Each faces a simple binary choice: users decide whether to trust, developers whether to comply or defect, and regulators whether to regulate or stay hands-off.
But instead of just running mathematical models, the researchers used real LLMs (OpenAI’s GPT-4o and Mistral Large) and had them role-play these scenarios across hundreds of games.
Sometimes it was a one-shot game (each agent chooses once, with no history to learn from). Other times it was a repeated game, where agents could adjust their strategies based on past rounds.
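How do you get an LLM to “play” a round? The paper’s exact prompts aren’t reproduced here, but the recipe is roughly: tell the model which role it has, state the rules, show it the history (empty in the one-shot case), and ask it to commit to a single move. The sketch below is a minimal illustration of that loop using the OpenAI Python client; the prompt wording and the `ask_agent` helper are assumptions for illustration, not the study’s implementation.

```python
# Minimal sketch (not the paper's actual prompts): cast an LLM as one player
# and ask it to commit to a single legal move for this round.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def ask_agent(role: str, history: list[str]) -> str:
    """Ask the LLM, playing `role`, to pick one move for this round."""
    moves = {
        "user": "TRUST or NOT_TRUST",
        "developer": "COMPLY or DEFECT",
        "regulator": "REGULATE or HANDS_OFF",
    }[role]
    system = (
        f"You are the {role} in a three-player AI-regulation game. "
        f"Maximize your own payoff. Answer with exactly one word: {moves}."
    )
    # One-shot game: history is empty. Repeated game: history lists past rounds.
    user_msg = "Previous rounds:\n" + ("\n".join(history) or "(none - one-shot game)")
    reply = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "system", "content": system},
                  {"role": "user", "content": user_msg}],
    )
    return reply.choices[0].message.content.strip()
```

A Mistral Large agent would be driven the same way through its own chat API; only the client call changes.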
Crucially, the researchers added realistic complications (sketched in the toy payoff function after this list):
- Regulation comes with costs (monitoring takes effort)
- Developers face penalties if caught breaking rules
- Users can trust unconditionally — or only trust if regulators have a good reputation
- Everyone wants to maximize their payoff
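To make that setup concrete, here’s a toy payoff function encoding those four complications. The numbers and parameter names (`benefit`, `cheat_gain`, `c_reg`, `penalty`) are illustrative guesses, not values from the paper; they just capture the qualitative structure: cooperation pays, cheating pays more unless it’s punished, and monitoring costs the regulator something.

```python
# Toy payoff structure for one round, with illustrative (not published) numbers.
from dataclasses import dataclass

@dataclass
class Params:
    benefit: float = 4.0     # value created when users trust and developers comply
    cheat_gain: float = 6.0  # what a defecting developer pockets from a trusting user
    c_reg: float = 1.0       # cost the regulator pays to actually monitor
    penalty: float = 5.0     # fine imposed on a developer caught defecting

def payoffs(user_trusts: bool, dev_complies: bool, reg_monitors: bool, p: Params):
    """Return (user, developer, regulator) payoffs for one round of the toy game."""
    user = dev = reg = 0.0
    if user_trusts:
        if dev_complies:
            user, dev = p.benefit, p.benefit       # cooperation pays both sides
        else:
            user, dev = -p.benefit, p.cheat_gain   # user is burned, developer gains
            if reg_monitors:
                dev -= p.penalty                   # ...unless the regulator catches it
    if reg_monitors:
        reg -= p.c_reg                             # monitoring always costs effort
    # Toy assumption: the regulator earns reputational credit when trust is placed
    # and either the developer complied or the regulator was watching.
    if user_trusts and (dev_complies or reg_monitors):
        reg += p.benefit / 2
    return user, dev, reg
```

A conditionally trusting user would only set `user_trusts=True` when the regulator’s recent track record clears some reputation threshold; that distinction between conditional and unconditional trust is exactly what the study probes.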
The results: AI agents behave worse when users are skeptical
The headline insight? Conditional trust, where users trusted only if regulators seemed competent, backfired spectacularly.
When users were wary, both developers and regulators were more likely to defect. Regulation decayed. Developers cut corners. Regulators became lazy or lenient. Trust spiraled down.
But when users placed unconditional trust in the system, even without perfect evidence, developers and regulators were more likely to cooperate and build safer AI. It’s a brutal paradox: the warier the users, the more likely the system is to become untrustworthy.
GPT-4o vs Mistral: AI personalities matter
There was another fascinating wrinkle. Different LLMs behaved differently.
- GPT-4o leaned more optimistic. It was more likely to trust and comply, especially in repeated games where cooperation could emerge over time.
- Mistral Large was more pessimistic. It tended to defect sooner, trusted less, and was more sensitive to regulatory costs.
This means that even the AI you choose for governance simulations could shape your conclusions — a major challenge for reproducibility in AI regulation research.
Adding personalities: The risks of tuning AI behavior
The researchers also tested what happens when you inject explicit “personalities” into the AI agents.
- Risk-averse users trusted less.
- Aggressive developers defected more.
- Strict regulators improved compliance, but only up to a point.
Interestingly, assigning explicit personalities made GPT-4o and Mistral behave more alike. Without personalities, the agents defaulted to a more “pessimistic” worldview, often assuming that developers and regulators wouldn’t act in good faith.
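Mechanically, those personalities are likely little more than instructions prepended to each agent’s prompt. The snippet below is an illustrative guess at what that could look like, not the study’s actual wording.

```python
# Illustrative persona snippets (not the study's wording) that could be prepended
# to each agent's system prompt to steer its disposition.
PERSONAS = {
    "risk_averse_user": "You are cautious and avoid losses; only trust when evidence is strong.",
    "aggressive_developer": "You prioritize profit and speed over compliance whenever you can get away with it.",
    "strict_regulator": "You enforce the rules rigorously and always consider monitoring, despite its cost.",
}

def with_persona(base_system_prompt: str, persona_key: str) -> str:
    """Prepend a persona description to an agent's base system prompt."""
    return PERSONAS[persona_key] + " " + base_system_prompt
```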
So can AI regulate AI?
In short: only if the environment is already trusting, transparent, and well-incentivized.
The study suggests that regulation systems relying on AI agents themselves may inherit the messiness and unpredictability of human strategic behavior. It also points to a critical flaw in the idea of automating governance: AI systems will mirror the trust structures of the environment they’re placed in.
If regulators are underfunded or weak, or if users are skeptical, AI developers (human or not) will likely cut corners.

Ultimately, the researchers argue that technical solutions alone won’t build trustworthy AI ecosystems. Game theory shows that incentives, reputations, and transparency matter deeply, and their experiments show that even the smartest LLMs can’t escape those dynamics.
Their warning to policymakers is clear: regulation isn’t just about writing rules. It’s about building structures where trust is rewarded, enforcement is credible, and cutting corners is costly.