Dataconomy
AI-based MARL method improves cooperation between teams of robots

by Kerem Gülen
July 26, 2022
in News, Artificial Intelligence

Individual agents, such as robots or drones, can cooperate and finish a task when communication channels are open. But what happens if their technology is insufficient or their signals are jammed, making communication impossible? Researchers from the University of Illinois at Urbana-Champaign started with this more challenging task. They created a technique that uses multi-agent reinforcement learning (MARL), a form of artificial intelligence, to teach multiple agents to cooperate without communicating. A great deal of research aims to improve the efficiency of AI systems; recently, for example, a selective regression method was found to improve AI accuracy.

MARL architecture enables multiple agents to solve complicated problems

“It’s easier when agents can talk to each other. But we wanted to do this in a way that’s decentralized, meaning that they don’t talk to each other. We also focused on situations where it’s not obvious what the different roles or jobs for the agents should be,” said Huy Tran, an aerospace engineer at Illinois.

Tran noted that this scenario is far more complicated and difficult, because it is unclear what one agent should do relative to another.

“The interesting question is how do we learn to accomplish a task together over time,” said Tran.

The MARL architecture shows promise for solving complicated problems with numerous agents. A significant difficulty in MARL, however, is designing private utility functions that guarantee cooperation when training decentralized agents. This problem is especially common in unstructured tasks with sparse rewards and many agents.

They tested their method in several MARL scenarios and implemented it using a centralized-training, decentralized-execution architecture. Their findings, showing improved performance and faster training compared to existing methods, indicate that disentangling successor features offers a promising route to coordination in MARL.
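The article names "disentanglement of successor features" without defining the term. As a hedged illustration of the underlying idea, here is a minimal single-agent sketch on a hypothetical 4-state ring environment (the paper's multi-agent disentanglement is more involved): successor features accumulate discounted state features, so values for any new linear reward follow from a dot product, with no re-planning.

```python
import numpy as np

# Toy sketch of successor features (SFs). Hypothetical example; the article
# gives no code or environment details.
# Q^pi(s) = psi^pi(s) . w, where psi accumulates discounted state features phi
# and w encodes the reward: r(s) = phi(s) . w.

n_states, gamma = 4, 0.9
# Feature map: one-hot state features, so psi counts discounted state visits.
phi = np.eye(n_states)
# A fixed policy's transition matrix on a 4-state ring: s -> (s + 1) mod 4.
P = np.roll(np.eye(n_states), 1, axis=1)

# Successor features satisfy the Bellman equation psi = phi + gamma * P @ psi,
# which we can solve in closed form for this tiny MDP.
psi = np.linalg.solve(np.eye(n_states) - gamma * P, phi)

# Once psi is known, values for ANY linear reward come from a dot product:
w_a = np.array([1.0, 0, 0, 0])   # task A: reward only in state 0
w_b = np.array([0, 0, 1.0, 0])   # task B: reward only in state 2
V_a, V_b = psi @ w_a, psi @ w_b
```

In the multi-agent setting, the appeal is that if each agent's learned features can be disentangled, its individual contribution to the shared team reward becomes identifiable.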

To solve this problem, Tran and his colleagues used machine learning to develop a utility function that tells an agent when it is doing something useful that benefits the team.

“With team goals, it’s hard to know who contributed to the win. We developed a machine learning technique that allows us to identify when an individual agent contributes to the global team objective. If you look at it in terms of sports, one soccer player may score, but we also want to know about actions by other teammates that led to the goal, like assists. It’s hard to understand these delayed effects,” explained Tran.
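The article does not give the formula Tran's team uses for crediting individual agents. A classic baseline for this kind of credit assignment is the counterfactual "difference reward": compare the team's reward with and without a given agent's contribution. The environment and helper below are hypothetical illustrations of that baseline, not the paper's method:

```python
# Hypothetical coverage task: the team reward is the number of targets
# covered by at least one agent. An agent's credit is a difference reward:
# D_i = G(joint) - G(joint with agent i removed).

def team_reward(positions, targets):
    """Team reward: number of targets covered by at least one agent."""
    return sum(1 for t in targets if t in positions)

def contribution(agent_idx, positions, targets):
    """Counterfactual difference reward for a single agent."""
    without = [p for i, p in enumerate(positions) if i != agent_idx]
    return team_reward(positions, targets) - team_reward(without, targets)

targets = {(0, 0), (1, 1), (2, 2)}
positions = [(0, 0), (1, 1), (1, 1)]   # agents 1 and 2 overlap on one target

# Agent 0 uniquely covers (0, 0): removing it loses a point.
print(contribution(0, positions, targets))   # -> 1
# Agent 2 duplicates agent 1's coverage: removing it changes nothing.
print(contribution(2, positions, targets))   # -> 0
```

This captures the soccer analogy from the quote: an agent whose action changes the team outcome (the scorer, or an assist that enabled the goal) gets nonzero credit, while redundant actions get none.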

The MARL method can also spot when an agent or robot is acting in a way that isn’t helpful to the end result.

“It’s not so much the robot chose to do something wrong, just something that isn’t useful to the end goal,” he added.

They evaluated their algorithms on simulated games such as Capture the Flag and the well-known computer game StarCraft.

In a related demonstration, Huy Tran showed how deep reinforcement learning can help robots decide their next move in a game of Capture the Flag.

“StarCraft can be a little bit more unpredictable — we were excited to see our method work well in this environment too,” said Tran.

According to Tran, this kind of algorithm is relevant to a wide range of real-world scenarios, including military surveillance, robot collaboration in a warehouse, traffic signal management, delivery coordination by autonomous vehicles, and grid control.

According to Tran, Seung Hyun Kim developed most of the theory behind the concept while he was a mechanical engineering undergraduate, and Neale Van Stralen, an aerospace undergraduate, helped with the implementation. Both students were advised by Tran and Girish Chowdhary. The work was recently presented to the AI community at a peer-reviewed conference on autonomous agents and multi-agent systems. Separately, recent studies showed that fake (synthetic) data improved robot performance by 40%.

Tags: AI, artificial intelligence, Machine Learning, ML, robotics, robots

COPYRIGHT © DATACONOMY MEDIA GMBH, ALL RIGHTS RESERVED.
