How physics-inspired AI is making our roads safer


By Kerem Gülen
February 7, 2025
in Research

According to a study conducted by Michael Walters (Gaia Lab, Nuremberg, Germany), Rafael Kaufmann (Primordia Co., Cascais, Portugal), Justice Sefas (University of British Columbia, B.C., Canada), and Thomas Kopinski (Gaia Lab, Fachhochschule Südwestfalen, Meschede, Germany), a new physics-inspired approach to AI safety could make multi-agent systems—such as autonomous vehicles—significantly safer.

Their paper, “Free Energy Risk Metrics for Systemically Safe AI: Gatekeeping Multi-Agent Study”, introduces a new risk measurement method that improves decision-making in AI systems by predicting risks ahead of time and taking preventive action.

What is the Free Energy Principle (FEP) and why does it matter?

At the heart of their research is the Free Energy Principle (FEP), a concept borrowed from statistical physics and best known from theoretical neuroscience. In simple terms, FEP describes how systems balance accuracy (energy) against simplicity (entropy) when making predictions.


Think of it like this: an AI system trying to navigate the world must strike a balance between gathering detailed information and acting efficiently. If the system is too complex, it becomes difficult to manage; if it’s too simple, it may overlook critical risks. The authors use this principle to create a new risk metric that avoids the need for vast amounts of data or overly complicated models, making AI safety more practical and transparent.
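
For readers who want the underlying math: in the standard variational formulation (a textbook identity, not something specific to this paper), the free energy of an approximate belief q(z) over hidden states z, given observations x, splits into exactly the two terms of the analogy above:

    \mathcal{F}[q] \;=\; \underbrace{\mathbb{E}_{q(z)}\!\left[-\ln p(x, z)\right]}_{\text{energy (accuracy)}} \;-\; \underbrace{H\!\left[q(z)\right]}_{\text{entropy (simplicity)}}

Minimizing F rewards beliefs that explain the observations well (low energy) while staying as uncommitted as possible (high entropy), which is precisely the accuracy-versus-simplicity trade-off the authors exploit for risk estimation.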


Cumulative Risk Exposure (CRE) is a smarter way to measure risk

The researchers propose a new risk measurement system called Cumulative Risk Exposure (CRE).

How is CRE different?

  • Unlike traditional risk models, which rely on extensive world models, CRE lets stakeholders define what “safe” means by specifying preferred outcomes.
  • This makes decision-making transparent and flexible, as the system adapts to different environments and needs.
  • Instead of relying on excessive sensor data, CRE estimates risk through predictive simulations over short time frames.

CRE provides a more efficient and adaptable way to assess risk in AI-driven systems, reducing reliance on resource-intensive calculations.
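
To make the idea concrete, here is a minimal sketch in Python of how a CRE-style estimate could be computed. Every name in it is an illustrative assumption rather than the paper's implementation: the paper grounds CRE in the Free Energy Principle, while this toy version simply averages, over short simulated rollouts, how often predicted states leave a stakeholder-defined "preferred" set.

    def cumulative_risk_exposure(state, policy, simulate, preferred,
                                 horizon=20, n_rollouts=50):
        """Toy CRE estimate: expected fraction of predicted states that
        fall outside the stakeholder-preferred ("safe") set.

        Assumptions: `simulate(state, policy, horizon)` yields `horizon`
        predicted states; `preferred(state)` returns True when a state
        satisfies the stakeholder's definition of safe.
        """
        total = 0.0
        for _ in range(n_rollouts):
            violations = sum(
                1 for s in simulate(state, policy, horizon) if not preferred(s)
            )
            total += violations / horizon
        return total / n_rollouts

    # Trivial 1-D example: "risk" means drifting past position 10.
    def simulate(state, policy, horizon):
        for _ in range(horizon):
            state += policy(state)
            yield state

    print(cumulative_risk_exposure(0.0, lambda s: 1.0, simulate,
                                   preferred=lambda s: s < 10))  # 0.55

Because the estimate only needs short rollouts and a yes/no preference check, it sidesteps both heavy sensor pipelines and a full world model, which is the efficiency claim made above.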

Gatekeepers: AI that steps in before things go wrong

To apply the CRE metric in real-world scenarios, the researchers introduce gatekeepers—modules that monitor AI decisions and intervene when necessary.

How do gatekeepers work?

  • In the case of autonomous vehicles, gatekeepers constantly simulate possible future scenarios to determine risk.
  • If they detect an unsafe outcome, they override the vehicle’s current driving mode and switch it to a safer behavior.
  • This allows AI systems to anticipate dangers before they happen rather than reacting after the fact, as the sketch below illustrates.
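
Continuing the toy example from the previous sketch, a gatekeeper could be a thin wrapper that re-scores every available driving mode with the cumulative_risk_exposure function above and overrides the active mode only when its predicted risk crosses a threshold. The function name and threshold are, again, illustrative assumptions rather than the paper's implementation:

    def gatekeeper_step(state, modes, active_mode, simulate, preferred,
                        risk_threshold=0.1):
        """Keep the current driving mode if its predicted CRE stays under
        the threshold; otherwise switch to the lowest-risk mode available
        (e.g., a cautious fallback controller)."""
        risks = {
            name: cumulative_risk_exposure(state, policy, simulate, preferred)
            for name, policy in modes.items()
        }
        if risks[active_mode] <= risk_threshold:
            return active_mode                # predicted safe: no intervention
        return min(risks, key=risks.get)      # intervene: pick the safest mode

Because the check runs on predicted rollouts rather than on sensed collisions, the override fires before the unsafe outcome materializes.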

Simulating safer roads with autonomous vehicles

The study tested this model in a simulated driving environment. The researchers divided vehicles into two groups:

  • “Egos” – Vehicles monitored and controlled by gatekeepers.
  • “Alters” – Background vehicles with fixed, pre-set driving behavior.

In this highway simulation, some Ego vehicles were placed under gatekeeper control, while others were not.
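
The split can be pictured with a toy assignment routine; the study's actual simulator, policies, and fraction of gatekept vehicles are not reproduced here, so every value below is a placeholder:

    import random

    def assign_controllers(vehicles, gatekept_fraction=0.25):
        """Mirror the Ego/Alter split: some Egos get a gatekeeper,
        the rest drive nominally, and Alters keep a fixed preset policy."""
        for v in vehicles:
            if v["role"] == "ego" and random.random() < gatekept_fraction:
                v["controller"] = "gatekept"      # monitored and overridable
            elif v["role"] == "ego":
                v["controller"] = "nominal"       # Ego without a gatekeeper
            else:
                v["controller"] = "fixed_alter"   # pre-set background driving
        return vehicles

    fleet = ([{"role": "ego"} for _ in range(10)]
             + [{"role": "alter"} for _ in range(30)])
    assign_controllers(fleet)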

Key findings:

  1. Even when only a small number of vehicles were under gatekeeper control, overall road safety improved.
  2. Fewer collisions occurred, showing that proactive intervention made a measurable difference.
  3. Vehicles maintained high speeds when safe but switched to cautious driving when risk levels rose.

The results suggest that even partial adoption of gatekeeper-controlled AI could lead to safer traffic conditions without compromising efficiency. While the study focused on autonomous vehicles, the CRE and gatekeeper model could apply to many other AI-driven fields.

Potential applications include:

  • Robotics: Ensuring that AI-powered robots work safely alongside humans.
  • Financial trading systems: Predicting high-risk market movements and adjusting strategies.
  • Industrial automation: Preventing AI-controlled machinery from making unsafe decisions.

Featured image credit: Kerem Gülen/Midjourney

Tags: AI, Featured, physics
