Why your brain might be the next blueprint for smarter AI

by Kerem Gülen
April 1, 2025
in Research

Artificial intelligence has mastered many things—writing poems, driving cars, even predicting your next binge-watch. But there’s one thing it still struggles with: knowing when to grow, when to forget, and how to keep evolving over time. In other words, AI doesn’t do neuroplasticity. Yet.

That’s the argument a group of researchers is making in a new paper that takes inspiration directly from human biology. They propose a radical rethinking of how neural networks learn—not just by fine-tuning their weights or expanding parameters, but by borrowing tricks from how the brain rewires itself: through neurogenesis (growing new neurons), neuroapoptosis (strategically killing off others), and plasticity (doing both, adaptively). And if their ideas catch on, the next generation of AI might behave less like a calculator and more like, well, you.

Why does this matter now?

Modern neural networks, especially large language models, are more powerful than ever—but also rigid. Once trained, their architectures stay fixed. New data can be added, but the skeleton of the model remains unchanged. In contrast, the human brain constantly updates itself. We grow new neurons, prune out the unhelpful ones, and strengthen connections based on experience. That’s how we learn new skills without forgetting the old ones—and recover from setbacks.

The researchers argue this biological flexibility could be exactly what AI needs, especially for real-world, long-term tasks. Imagine a customer service chatbot that can evolve with new product lines or a medical AI that grows smarter with every patient it sees. These systems shouldn’t just re-train—they should rewire.

The dropin revolution: Letting AI grow new neurons

If you’ve heard of dropout—the popular regularization method where random neurons are deactivated during training to prevent overfitting—then you’ll appreciate the charm of its inverse: “dropin.”

Dropin is a term the researchers coined to describe the artificial equivalent of neurogenesis. The idea is simple: when a neural network hits a wall in learning, why not give it more capacity? Just as the brain grows new neurons in response to stimuli, a model can spawn new neurons and connections when it struggles with a task. Think of it as AI with a growth spurt.

The paper even proposes an algorithm: if the model’s loss function stagnates (meaning it’s learning little), dropin activates, adding fresh neurons selectively. These neurons don’t just get tossed in blindly. They’re placed where the model shows signs of high stress or underperformance. In essence, the network is given room to breathe and adapt.
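
The paper's exact growth rule aside, the plateau-triggered idea is easy to sketch. Here's a minimal, illustrative version in PyTorch; the plateau test, the growth size, and the near-zero initialization of the new neurons are assumptions for illustration, not details from the paper:

```python
import torch
import torch.nn as nn

def loss_has_plateaued(history, window=5, tol=1e-3):
    """Heuristic plateau check: has the loss barely moved lately?"""
    if len(history) < window:
        return False
    recent = history[-window:]
    return max(recent) - min(recent) < tol

def dropin(layer: nn.Linear, next_layer: nn.Linear, extra: int):
    """Sketch of 'dropin': add `extra` neurons to `layer` and widen
    `next_layer`'s input to match. Old weights are copied over; new
    weights start near zero so the network's behavior changes gradually
    (an illustrative choice, not prescribed by the paper). Which layer
    to grow -- ideally one showing 'stress' -- is left to the caller."""
    wider = nn.Linear(layer.in_features, layer.out_features + extra)
    nxt = nn.Linear(next_layer.in_features + extra, next_layer.out_features)
    with torch.no_grad():
        wider.weight[: layer.out_features] = layer.weight
        wider.bias[: layer.out_features] = layer.bias
        nn.init.normal_(wider.weight[layer.out_features :], std=1e-2)
        wider.bias[layer.out_features :].zero_()
        nxt.weight[:, : next_layer.in_features] = next_layer.weight
        nn.init.normal_(nxt.weight[:, next_layer.in_features :], std=1e-2)
        nxt.bias.copy_(next_layer.bias)
    return wider, nxt

# In the training loop (`model.fc1`/`model.fc2` are hypothetical names):
# if loss_has_plateaued(loss_history):
#     model.fc1, model.fc2 = dropin(model.fc1, model.fc2, extra=8)
#     optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # rebuild optimizer
```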

And sometimes, AI needs to forget

Just as crucial as growth is pruning. Neuroapoptosis—the brain’s self-destruct button for underperforming neurons—has its digital analogues too. Dropout is one. Structural pruning, where entire neurons or connections are permanently deleted, is another.

The researchers detail how various dropout strategies mirror this selective forgetting. From adaptive dropout (which changes the dropout rate based on a neuron’s usefulness) to advanced forms like Concrete or Variational Dropout (which learn which neurons to kill during training), the AI world is already halfway toward mimicking apoptosis.

And structural pruning? It’s even more hardcore. Once a neuron is deemed useless, it’s gone. This isn’t just good for efficiency—it can also reduce overfitting, speed up inference, and save energy. But pruning needs to be done with surgical precision. Overdo it, and you risk “layer collapse”—a model that forgets too much to function.
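
PyTorch already ships primitives for exactly this kind of surgery. As a rough illustration (the 25% cut is an arbitrary choice), structured pruning zeroes out whole output neurons by magnitude and then makes the cut permanent:

```python
import torch.nn as nn
import torch.nn.utils.prune as prune

layer = nn.Linear(128, 64)

# Structured L2 pruning: zero the 25% of output neurons (rows of the
# weight matrix) with the smallest L2 norm -- a digital stand-in for
# apoptosis of "useless" units.
prune.ln_structured(layer, name="weight", amount=0.25, n=2, dim=0)

# Fold the mask into the weights and drop the reparametrization.
# Note: the zeroed rows still occupy memory (physically shrinking the
# layer takes an extra step), and overdoing the amount risks the
# "layer collapse" failure mode mentioned above.
prune.remove(layer, "weight")
```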


Here’s where things get exciting. Real brains don’t just grow or prune—they do both, all the time, in response to learning. That’s neuroplasticity. And AI could use a dose of it.

The researchers propose combining dropin and dropout in a continuous loop. As models receive new data or face new tasks, they dynamically expand or contract—just like your brain adapting to a new language or recovering from injury. They even present an algorithm that uses learning rate changes and model feedback to decide when to grow, when to shrink, and when to stay put.
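
In spirit, that control loop might look something like the sketch below. Every threshold and signal here is an illustrative guess rather than the paper's actual rule: grow when learning stalls, shrink when too many neurons sit idle, otherwise keep training.

```python
import torch

def loss_has_plateaued(history, window=5, tol=1e-3):
    """Same heuristic as in the dropin sketch above."""
    return len(history) >= window and max(history[-window:]) - min(history[-window:]) < tol

def plasticity_step(loss_history, usage_scores,
                    plateau_tol=1e-3, idle_threshold=1e-2, idle_share=0.20):
    """Decide whether to grow, shrink, or keep the architecture.
    `usage_scores` holds one score per neuron (say, mean absolute
    activation over recent batches); how best to measure 'usefulness'
    is exactly the open question raised below."""
    if loss_has_plateaued(loss_history, tol=plateau_tol):
        return "grow"    # learning stalled: dropin new neurons
    idle_fraction = (usage_scores < idle_threshold).float().mean().item()
    if idle_fraction > idle_share:
        return "shrink"  # lots of near-idle neurons: prune them
    return "keep"        # stable regime: plain gradient updates

# e.g. plasticity_step([0.52, 0.51, 0.515, 0.512, 0.513], torch.rand(64))
```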

This isn’t science fiction. Similar ideas are already creeping into AI: adapter-based fine-tuning like LoRA, dynamic layer expansion in LLMs, and continual learning frameworks all point in this direction. But what’s missing is a unifying framework that ties these methods back to biology—and systematizes when and how to adapt.
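
To make the adapter flavor concrete, here's a minimal rendition of the LoRA idea: the pretrained weight stays frozen while a small low-rank add-on learns the update. The rank and scaling below are common illustrative defaults, not values tied to any particular model.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Minimal LoRA-style adapter: y = Wx + (alpha/r) * B(Ax),
    with W frozen and only the low-rank factors A, B trained."""
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False          # the fixed "skeleton"
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))  # starts as a no-op
        self.scale = alpha / rank

    def forward(self, x):
        # Frozen path plus a small trainable correction: new capacity
        # without rewriting the original weights.
        return self.base(x) + (x @ self.A.T @ self.B.T) * self.scale
```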

Dynamic networks aren’t easy to manage. Adding and deleting neurons during training complicates debugging, makes error tracing harder, and risks instability. And unlike biological brains, which have millions of years of evolution on their side, neural networks have only a few lines of code and some heuristics.

There’s also the problem of measuring success. When is a new neuron helpful? When is it just noise? And how do you balance short-term learning with long-term memory—a challenge even humans haven’t fully solved?

A new blueprint for AI and for us

Despite the hurdles, the vision is compelling. AI that doesn’t just learn—it evolves. AI that knows when to forget. That expands when challenged. That adapts like a living system, not a frozen codebase.

What’s more, the feedback loop between neuroscience and AI could go both ways. The more we build models inspired by the brain, the more we might learn about how our own minds work. And someday, AI might help us unlock deeper secrets of cognition, memory, and adaptation.

So, the next time you forget where you left your keys—or learn a new skill—remember: your brain is doing what today’s smartest AI is just beginning to grasp. And if researchers have their way, your forgetful, adaptable, plastic brain might just be the gold standard for the machines of tomorrow.


