MIT researchers have built an AI that teaches itself how to learn

MIT’s Improbable AI Lab unveiled SEAL, a framework that lets language models generate and learn from their own training data. SEAL enables models to permanently update their weights by studying their own “self-edits.”

By Kerem Gülen
October 20, 2025
in Research

Large language models like ChatGPT have a fundamental problem: they’re static. They are trained on a mountain of data and then frozen in time, like a textbook printed in 2023 that knows nothing about 2024. Now, researchers at MIT’s Improbable AI Lab have open-sourced a new framework that could change that. Their paper, presented at the recent NeurIPS 2025 conference, unveils a system called Self-Adapting Language Models (SEAL).

The core idea is simple, but the implications are huge: the AI learns to teach itself. Instead of just passively holding information, SEAL enables a model to generate its own high-quality training data and then use that data to permanently update its own weights. This matters because it’s the first real step away from static, “know-it-all” bots and toward AI models that can actually evolve, adapt, and incorporate new information over time.

Why AI models are bad students

Right now, if you want an LLM to learn a new fact, you have two bad options. You can “stuff” the information into its context window (the prompt), but it will forget that fact the moment the conversation resets. Or, you can perform a massive, expensive retraining, which is like reprinting an entire encyclopedia just to add a new entry. Neither of these methods is true learning.

The MIT team, including Adam Zweiger, Jyothish Pari, and Pulkit Agrawal, looked at how humans learn. When a student prepares for an exam, they don’t just re-read the textbook 50 times. A good student rewrites the information, making flashcards, summarizing chapters, and creating their own notes. This process of reformatting and assimilating information is what cements it in their brain.

SEAL is designed to be that good student. It learns to take the “raw textbook” of new information and generate its own “study notes”—which the paper calls “self-edits”—in whatever format is most effective for its own learning.
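To make that move concrete, here is a minimal sketch in Python, assuming a Hugging Face causal language model: "gpt2" is only a stand-in (not the model from the paper), and the prompt wording, optimizer settings, and single gradient step are illustrative assumptions. The model writes its own notes on a passage, and those notes then become fine-tuning data that changes its weights for good.

```python
# Minimal sketch, not the paper's implementation: "gpt2" is a stand-in model,
# and the prompt text, optimizer settings, and single update are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

passage = "MIT's Improbable AI Lab presented SEAL at NeurIPS 2025."

# 1) The model writes its own "study notes" (a self-edit) about the passage.
prompt = passage + "\nRewrite the passage above as a list of implications:\n"
inputs = tok(prompt, return_tensors="pt")
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=64, do_sample=True,
                         pad_token_id=tok.eos_token_id)
self_edit = tok.decode(out[0][inputs["input_ids"].shape[1]:],
                       skip_special_tokens=True)

# 2) Those notes become training data; a gradient step makes the change
#    persistent, unlike stuffing the fact into the prompt.
train_text = passage + "\n" + self_edit
batch = tok(train_text, return_tensors="pt")
labels = batch["input_ids"].clone()

model.train()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)
loss = model(**batch, labels=labels).loss
loss.backward()
optimizer.step()
optimizer.zero_grad()
```

What turns this from blind self-training into actual learning is the reinforcement loop described next, which teaches the model to write notes that make the update stick.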

So, how does it learn to ‘study’?

It learns through trial and error, using a process called reinforcement learning. Think of it as an AI holding its own study sessions; a rough sketch of the loop in code follows the steps below.

  1. Get the lesson: The AI is given a new piece of information (like a passage of text).
  2. Write the notes: It generates a “self-edit”—its own synthetic notes on that info. This could be a list of key implications, a set of question-and-answer pairs, or just a simple summary.
  3. Take the quiz: The AI is briefly fine-tuned on its own notes and then immediately given a pop quiz on the new information.
  4. Get the grade: If it passes the quiz, it gets a “reward.” This positive feedback teaches the model that the “self-edit” notes it just wrote were high-quality and effective.
  5. Study smarter: If it fails, it learns that its notes were bad and tries a different format next time. Over thousands of these loops, the AI doesn’t just learn the new facts; it learns how to learn new facts more efficiently.
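Below is a toy, dependency-free sketch of that control flow. In the real system, steps 3 and 4 mean fine-tuning the language model and grading it on held-out questions; here a string-matching quiz and a few hard-coded note formats stand in for those steps, and the reward simply upweights formats that pass the quiz (a rejection-sampling flavour of reinforcement learning; the paper's exact training recipe may differ). Every function and constant here is an illustrative assumption.

```python
import random

NOTE_FORMATS = ["summary", "qa_pairs", "implications"]

def write_self_edit(passage: str, note_format: str) -> str:
    """Step 2 (toy): the 'model' writes study notes in a chosen format."""
    if note_format == "qa_pairs":
        return f"Q: What does the passage say? A: {passage}"
    if note_format == "implications":
        return f"Implication: {passage}"
    return f"Summary: {passage[:40]}"  # a lossy summary, on purpose

def finetune_and_quiz(self_edit: str, quiz: list[tuple[str, str]]) -> float:
    """Steps 3-4 (toy): 'fine-tune' on the notes, then grade a quiz.
    An answer counts as learned only if it survived into the notes."""
    return sum(answer in self_edit for _, answer in quiz) / len(quiz)

def seal_outer_loop(passages, quizzes, rounds=3, samples_per_passage=4):
    """Step 5 (toy): reinforce note formats that earn rewards, so later
    rounds sample them more often."""
    weights = {fmt: 1.0 for fmt in NOTE_FORMATS}
    for _ in range(rounds):
        for passage, quiz in zip(passages, quizzes):
            picks = random.choices(NOTE_FORMATS,
                                   weights=[weights[f] for f in NOTE_FORMATS],
                                   k=samples_per_passage)
            for fmt in picks:
                self_edit = write_self_edit(passage, fmt)    # step 2
                reward = finetune_and_quiz(self_edit, quiz)  # steps 3-4
                if reward > 0.5:                             # passed the quiz
                    weights[fmt] += 1.0                      # step 5
    return weights

passages = ["SEAL was presented at NeurIPS 2025 by MIT's Improbable AI Lab."]
quizzes = [[("Which lab presented SEAL?", "Improbable AI Lab")]]
print(seal_outer_loop(passages, quizzes))  # lossy 'summary' notes lose out
```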

And the results?

The researchers tested SEAL in two key areas, and the results are striking.

First, they tested its ability to incorporate new knowledge. They gave the model text passages and quizzed it on the contents. After training itself with SEAL, the AI’s accuracy jumped to 47.0%. Here’s the kicker: that score outperformed synthetic data generated by the much larger and more powerful GPT-4.1, which only scored 46.3%. The smaller model literally taught itself to be “smarter” than its massive competitor at this specific task.

Second, they tested its ability to learn a new skill from just a few examples, using ARC, a notoriously hard abstract reasoning benchmark. SEAL’s job wasn’t just to solve each puzzle, but to generate the best learning strategy for itself (e.g., “use these data augmentations,” “set this learning rate”). The self-adapting AI found a successful strategy 72.5% of the time. The baseline model, without this self-learning, fumbled, succeeding only 20% of the time.
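In this setting the self-edit isn’t prose at all but a training recipe. The sketch below shows what such a recipe might look like and how the best candidate could be picked by reward; the field names, values, and the random stand-in evaluation are assumptions for illustration, not the paper’s actual schema.

```python
import random
from dataclasses import dataclass, field

@dataclass
class SelfEditConfig:
    """Illustrative shape of an ARC-style self-edit: a learning strategy,
    not study notes. Field names and values are assumptions."""
    augmentations: list[str] = field(default_factory=lambda: ["rotate", "reflect"])
    learning_rate: float = 1e-4
    epochs: int = 3

def inner_finetune_and_eval(cfg: SelfEditConfig) -> float:
    """Placeholder: fine-tune on the augmented demonstrations using cfg,
    then score the held-out puzzle. A random number stands in for that score."""
    return random.random()

# The model proposes several candidate recipes; the one whose inner
# fine-tune actually solves the held-out puzzle earns the reward.
candidates = [
    SelfEditConfig(augmentations=["rotate"], learning_rate=1e-4, epochs=2),
    SelfEditConfig(augmentations=["rotate", "reflect"], learning_rate=5e-5, epochs=4),
    SelfEditConfig(augmentations=["recolor"], learning_rate=1e-3, epochs=1),
]
best = max(candidates, key=inner_finetune_and_eval)
print("selected self-edit:", best)
```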

What’s the catch?

This all sounds great, but a pragmatist would be right to ask about the downsides. The researchers are transparent about the limitations.

  • Catastrophic forgetting: The model still suffers from the classic AI problem of “catastrophic forgetting.” As it crams for new exams, it starts to forget what it learned for the midterms. Learning a new fact can still overwrite old ones.
  • It’s painfully slow: This process is not fast. The researchers note that the computational overhead is “substantial.” It takes 30-45 seconds just to grade a single self-edit during the training loop.
  • It needs an answer key: The current system relies on having a “quiz” with correct answers to provide that all-important reward signal.

Despite these hurdles, the team is looking ahead. Experts project that we will run out of high-quality human-generated text to train AI on by 2028. When we hit that “data wall,” progress will hinge on a model’s ability to generate its own high-utility training data. This research is a crucial roadmap for how that might work, paving the way for future AI “agents” that don’t just answer your questions, but actively learn from their interactions with the world and get smarter every day.

