
How physics-inspired AI is making our roads safer


by Kerem Gülen
February 7, 2025
in Research

According to a study conducted by Michael Walters (Gaia Lab, Nuremberg, Germany), Rafael Kaufmann (Primordia Co., Cascais, Portugal), Justice Sefas (University of British Columbia, B.C., Canada), and Thomas Kopinski (Gaia Lab, Fachhochschule Südwestfalen, Meschede, Germany), a new physics-inspired approach to AI safety could make multi-agent systems, such as autonomous vehicles, significantly safer.

Their paper, “Free Energy Risk Metrics for Systemically Safe AI: Gatekeeping Multi-Agent Study”, introduces a new risk measurement method that improves decision-making in AI systems by predicting risks ahead of time and taking preventive action.

What is the Free Energy Principle (FEP) and why does it matter?

At the heart of their research is the Free Energy Principle (FEP), a concept drawn from statistical physics. In simple terms, the FEP describes how a system balances accuracy (energy) against simplicity (entropy) when making predictions.
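In its simplest variational form, free energy is expected energy minus entropy: a belief that explains observations poorly pays an energy cost, while an overconfident belief pays an entropy cost. The toy Python sketch below illustrates that trade-off; all numbers are invented for illustration, and this is not code from the paper.

```python
import numpy as np

# Toy illustration of variational free energy: F = energy - entropy.
# "Energy" penalizes beliefs that explain the observation poorly;
# "entropy" rewards simpler, less overconfident beliefs.

def free_energy(q, log_joint):
    """q: belief over hidden states (sums to 1).
    log_joint: log p(observation, state) for each hidden state."""
    energy = -np.sum(q * log_joint)      # accuracy term
    entropy = -np.sum(q * np.log(q))     # simplicity term
    return energy - entropy

q = np.array([0.7, 0.2, 0.1])            # current belief over three states
log_joint = np.log([0.45, 0.27, 0.18])   # toy joint probabilities p(x, z)
print(free_energy(q, log_joint))         # lower F = better trade-off
```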


Think of it like this: an AI system trying to navigate the world must strike a balance between gathering detailed information and acting efficiently. If the system is too complex, it becomes difficult to manage; if it’s too simple, it may overlook critical risks. The authors use this principle to create a new risk metric that avoids the need for vast amounts of data or overly complicated models, making AI safety more practical and transparent.




Cumulative Risk Exposure (CRE) is a smarter way to measure risk

The researchers propose a new risk measurement system called Cumulative Risk Exposure (CRE).

How is CRE different?

  • Unlike traditional risk models, which rely on extensive world models, CRE lets stakeholders define what “safe” means by specifying preferred outcomes.
  • This makes decision-making transparent and flexible, as the system adapts to different environments and needs.
  • Instead of relying on exhaustive sensor data, CRE estimates risk through predictive simulations over short time horizons.

CRE provides a more efficient and adaptable way to assess risk in AI-driven systems, reducing reliance on resource-intensive calculations.
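The paper defines CRE relative to stakeholder-specified preferences, and its exact formulation is not reproduced here. As a rough sketch of the idea, risk can be estimated as the fraction of short simulated rollout steps that leave a preferred set of states. The toy dynamics, the 10-metre gap preference, and the function names below are all hypothetical.

```python
import random

# Hypothetical CRE-style estimate: run short predictive rollouts and
# count how often outcomes leave a stakeholder-defined preferred set.

def is_preferred(state):
    # Stakeholders define "safe" by naming preferred outcomes,
    # e.g. keeping at least a 10 m gap to the lead vehicle (assumed).
    return state["gap_m"] > 10.0

def simulate_step(state):
    # Toy one-step dynamics: the gap drifts randomly.
    return {"gap_m": state["gap_m"] + random.uniform(-3.0, 1.0)}

def cumulative_risk_exposure(state, horizon=10, rollouts=200):
    exposure = 0
    for _ in range(rollouts):
        s = dict(state)
        for _ in range(horizon):
            s = simulate_step(s)
            if not is_preferred(s):
                exposure += 1   # a step spent outside the preferred set
    return exposure / (rollouts * horizon)

print(cumulative_risk_exposure({"gap_m": 20.0}))  # ~0 low risk, ~1 high risk
```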

Gatekeepers: AI that steps in before things go wrong

To apply the CRE metric in real-world scenarios, the researchers introduce gatekeepers—modules that monitor AI decisions and intervene when necessary.

How do gatekeepers work?

  • In the case of autonomous vehicles, gatekeepers constantly simulate possible future scenarios to determine risk.
  • If they detect an unsafe outcome, they override the vehicle’s current driving mode and switch it to a safer behavior.
  • This allows AI systems to anticipate dangers before they happen rather than reacting after the fact.
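In that spirit, a gatekeeper can be sketched as a thin wrapper around such a risk estimate: if the estimated risk of continuing in the current mode crosses a threshold, it overrides to a cautious behavior. The 0.05 threshold and the mode names are illustrative assumptions, not values from the study.

```python
# Hypothetical gatekeeper decision rule built on a CRE-style estimator,
# e.g. the cumulative_risk_exposure sketch above.

RISK_THRESHOLD = 0.05  # assumed tolerance for time outside the preferred set

def gatekeeper_select_mode(state, risk_estimator):
    """Return the driving mode the gatekeeper allows for this step."""
    risk = risk_estimator(state)
    if risk > RISK_THRESHOLD:
        return "cautious"   # intervene before the unsafe outcome occurs
    return "nominal"        # drive efficiently while risk stays low
```

For example, `gatekeeper_select_mode(vehicle_state, cumulative_risk_exposure)` would re-evaluate the mode at every control step, which is what lets the system act before a danger materializes rather than after.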

Simulating safer roads with autonomous vehicles

The study tested this model in a simulated driving environment. The researchers divided vehicles into two groups:

  • “Egos” – Vehicles monitored and controlled by gatekeepers.
  • “Alters” – Background vehicles with fixed, pre-set driving behavior.

In the highway simulation, only some of the Ego vehicles were placed under gatekeeper control, while the rest were left to drive without intervention.
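As a rough illustration of that split (all counts and the gatekeeper fraction are invented), the simulated fleet might be populated like this:

```python
# Hypothetical fleet setup: Alters keep fixed pre-set behavior,
# while only a fraction of the Egos receive a gatekeeper.

def build_fleet(n_egos=20, n_alters=30, gatekeeper_fraction=0.5):
    alters = [{"id": i, "role": "alter", "gatekeeper": False}
              for i in range(n_alters)]
    egos = [{"id": n_alters + i, "role": "ego",
             "gatekeeper": i < int(n_egos * gatekeeper_fraction)}
            for i in range(n_egos)]
    return alters + egos
```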

Key findings:

  1. Even when only a small number of vehicles were under gatekeeper control, overall road safety improved.
  2. Fewer collisions occurred, showing that proactive intervention made a measurable difference.
  3. Vehicles maintained high speeds when safe but switched to cautious driving when risk levels rose.

The results suggest that even partial adoption of gatekeeper-controlled AI could lead to safer traffic conditions without compromising efficiency. While the study focused on autonomous vehicles, the CRE and gatekeeper model could apply to many other AI-driven fields.

Potential applications include:

  • Robotics: Ensuring that AI-powered robots work safely alongside humans.
  • Financial trading systems: Predicting high-risk market movements and adjusting strategies.
  • Industrial automation: Preventing AI-controlled machinery from making unsafe decisions.

Featured image credit: Kerem Gülen/Midjourney

Tags: AI, Featured, physics
