Dataconomy
China develops SpikingBrain1.0, a brain-inspired AI model

The new LLM mimics the human brain's localized attention, reportedly runs 25–100× faster than conventional models, and operates without Nvidia GPUs.

By Kerem Gülen
September 10, 2025
in Artificial Intelligence

Researchers at the Chinese Academy of Sciences have unveiled SpikingBrain1.0, described as the world’s first “brain-like” large language model (LLM). The model is designed to consume less power and to run without Nvidia GPUs, addressing two limitations of conventional AI systems.

Existing models, including ChatGPT and Meta’s Llama, rely on “attention,” a process that compares every word in a sentence to all others to predict the next word. While effective, this approach consumes large amounts of energy and slows processing for long texts, such as books.
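The quadratic comparison described above can be sketched in a few lines of NumPy. This is illustrative only: real transformers add learned query/key/value projections, multiple heads, and causal masking, but the core cost is the same n×n score matrix.

```python
import numpy as np

def full_attention(x):
    """Naive scaled dot-product self-attention: every token is
    compared against every other token, so compute and memory
    grow quadratically with sequence length."""
    n, d = x.shape
    scores = x @ x.T / np.sqrt(d)  # (n, n) pairwise comparisons
    # Numerically stable softmax over each row.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ x             # each output mixes ALL tokens

x = np.random.rand(8, 4)           # 8 tokens, 4-dim embeddings
out = full_attention(x)
print(out.shape)                   # (8, 4)
```

For a book-length input of 100,000 tokens, that score matrix alone holds 10 billion entries, which is why long texts are slow and energy-hungry under this scheme.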

Traditional models also depend heavily on Nvidia GPUs, creating hardware bottlenecks for scaling.


Brain-inspired approach

SpikingBrain1.0 uses a localized attention mechanism, focusing on nearby words rather than analyzing entire texts. This mimics the human brain’s ability to concentrate on recent context during conversations. Researchers claim this method allows the model to function 25 to 100 times faster than conventional LLMs while maintaining comparable accuracy.
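SpikingBrain1.0's exact mechanism has not been detailed here, but generic sliding-window ("localized") attention conveys the idea: each token looks only at its neighbors, so cost grows linearly with length. In this sketch `window` is an illustrative parameter, and a production kernel would compute only the in-window scores rather than masking a full matrix.

```python
import numpy as np

def local_attention(x, window=2):
    """Sliding-window attention: each token attends only to tokens
    within `window` positions of itself, rather than to the whole
    sequence."""
    n, d = x.shape
    scores = x @ x.T / np.sqrt(d)
    # Mask out everything outside the local window.
    idx = np.arange(n)
    mask = np.abs(idx[:, None] - idx[None, :]) > window
    scores[mask] = -np.inf         # softmax turns these into 0
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ x
```

Because distant tokens are masked out, editing the end of a long text leaves earlier outputs untouched, mirroring how a conversation partner weighs recent context most heavily.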

The model runs on China’s homegrown MetaX chip platform, eliminating reliance on Nvidia GPUs. It selectively responds to input, reducing power consumption and enabling continual pre-training with less than 2% of the data needed by mainstream open-source models. The researchers note that in specific scenarios, SpikingBrain1.0 can achieve over 100 times the speed of traditional AI models.
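The "selective response" idea comes from spiking neural networks: computation is triggered only when a neuron actually fires, so quiet inputs cost almost nothing. A toy leaky integrate-and-fire neuron shows the principle; the threshold and leak values here are arbitrary illustrations, not SpikingBrain1.0's actual parameters.

```python
def lif_neuron(inputs, threshold=1.0, leak=0.9):
    """Leaky integrate-and-fire neuron: accumulates input, leaks
    charge over time, and emits a spike (1) only when the membrane
    potential crosses the threshold. Downstream work happens only
    on spikes, which is the source of the energy savings."""
    v, spikes = 0.0, []
    for i in inputs:
        v = v * leak + i           # leak old charge, add new input
        if v >= threshold:
            spikes.append(1)
            v = 0.0                # reset after firing
        else:
            spikes.append(0)       # silent: no downstream compute
    return spikes

print(lif_neuron([0.5, 0.5, 0.2, 0.0, 0.9, 0.3]))
# → [0, 0, 1, 0, 0, 1]
```

Six inputs produce only two spikes; in hardware, the four silent steps trigger no downstream activity at all, unlike a dense network that multiplies every weight on every step.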

The development of SpikingBrain1.0 comes amid U.S. technology export restrictions that limit China’s access to advanced chips required for AI and server applications. These restrictions have accelerated domestic AI innovation, with SpikingBrain1.0 representing a step toward a more self-sufficient AI ecosystem.



Tags: Chinese Academy of Sciences, SpikingBrain1.0


COPYRIGHT © DATACONOMY MEDIA GMBH, ALL RIGHTS RESERVED.