MAGELLAN: The AI that teaches itself by predicting its own learning

To test MAGELLAN, researchers used an interactive AI environment called Little-Zoo, where an LLM agent had to learn various tasks—like recognizing objects, growing plants, and even interacting with animals

By Kerem Gülen
February 12, 2025
in Research

Large Language Models (LLMs) are getting smarter, but there’s one big problem: they don’t know how to learn efficiently. MAGELLAN is a new AI framework that mimics human learning by predicting its own progress—allowing it to navigate massive goal spaces without getting stuck on what’s too easy or too hard.

Developed by researchers from Inria and MIT, including Loris Gaven, Thomas Carta, Clément Romac, Cédric Colas, Sylvain Lamprier, Olivier Sigaud, and Pierre-Yves Oudeyer, the study “MAGELLAN: Metacognitive predictions of learning progress guide autotelic LLM agents in large goal spaces” introduces a framework that gives AI a metacognitive ability—essentially, the skill to predict how much it will improve by practicing a task. This lets AI prioritize learning goals in an open-ended way, much like humans do when tackling new skills.

AI doesn’t prioritize learning well

Traditional AI learning methods struggle in vast goal spaces. They either:

  1. Waste time on tasks they’ve already mastered, making slow progress.
  2. Attempt goals that are too difficult, leading to repeated failures.
  3. Require human-defined goal categories, which is inefficient and doesn’t scale.

Humans, on the other hand, instinctively seek out challenges that stretch their abilities without being impossible. MAGELLAN brings this human-like approach to LLM training.

How MAGELLAN works: Predicting progress, not just performance

Most AI training systems either:

  • Measure past performance (which doesn’t help with new goals).
  • Use fixed difficulty ratings (which don’t adapt to changing abilities).

MAGELLAN takes a smarter route. It dynamically estimates how much an AI will improve on a goal if it practices it. This allows AI models to select learning tasks that maximize progress rather than just attempt things randomly.

The method works through a process called Absolute Learning Progress (ALP)—tracking how much an AI improves on a given task over time. Using ALP, MAGELLAN clusters goals into meaningful categories without human intervention, letting AI generalize across related skills.
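To make the idea concrete, here is a minimal Python sketch of learning-progress-driven goal selection. The names (estimate_success_probability, select_goal) and the epsilon-greedy exploration step are illustrative assumptions, not the paper's actual implementation; in MAGELLAN the competence estimator is learned on top of the LLM itself rather than supplied as a plain function.

```python
import random

# Hypothetical sketch: pick the next training goal by estimated learning
# progress, i.e. how much the predicted success rate on a goal has changed
# between a past snapshot of the agent and its current state.

def absolute_learning_progress(goal, current_params, past_params,
                               estimate_success_probability):
    """ALP: change in predicted success probability on `goal` between
    the agent's past snapshot and its current parameters."""
    p_now = estimate_success_probability(goal, current_params)
    p_then = estimate_success_probability(goal, past_params)
    return abs(p_now - p_then)

def select_goal(goals, current_params, past_params,
                estimate_success_probability, epsilon=0.1):
    """Prefer the goal with the highest estimated learning progress,
    keeping a little random exploration (epsilon-greedy)."""
    if random.random() < epsilon:
        return random.choice(goals)
    return max(
        goals,
        key=lambda g: absolute_learning_progress(
            g, current_params, past_params, estimate_success_probability),
    )
```

Goals the agent has already mastered and goals far beyond its reach both show little change in predicted success, so this kind of selection naturally concentrates practice on tasks at the edge of the agent's current abilities.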


Teaching AI to learn like a human

To test MAGELLAN, researchers used an interactive AI environment called Little-Zoo, where an LLM agent had to learn various tasks—like recognizing objects, growing plants, and even interacting with animals.

The results were clear:

  • AI trained with MAGELLAN outperformed all other methods, mastering more tasks faster.
  • It generalized better, meaning it could tackle new, unseen challenges more effectively.
  • It didn’t require human-labeled goal categories, proving its scalability.

By contrast, traditional learning approaches either plateaued early or required expert-defined goal groupings, making them rigid and inefficient.

Why this matters

MAGELLAN’s biggest breakthrough is self-directed learning. Instead of relying on human engineers to select goals, the AI can autonomously determine what to learn next based on its own progress. This shifts AI from being passively trained to actively improving itself, making it a transformative approach across multiple fields.

AI assistants can teach themselves new skills by identifying areas where they struggle, enhancing their ability to adapt without human intervention. In robotics, machines can refine their abilities by focusing on tasks with the highest learning potential, leading to more efficient and capable autonomous systems. In education, AI tutors can adjust lessons in real-time, not just based on past performance but on predicted improvement, offering a more personalized learning experience.

MAGELLAN proves that AI can think about its own learning, making it vastly more efficient in open-ended environments. The next step might be expanding this method beyond text-based goals into fields like robotics, scientific discovery, and even human education.


Featured image credit: Kerem Gülen/Ideogram

Tags: AI, Featured, LLM
