Researchers at the USC Viterbi School of Engineering have developed artificial neurons that physically replicate the complex electrochemical behavior of biological brain cells. This innovation, documented in the journal Nature Electronics, is a major leap for neuromorphic (brain-inspired) computing. Unlike existing chips that just *simulate* brain activity with math, these new artificial neurons physically *emulate* it. This matters because the new design is a fraction of the size of conventional transistor-based artificial neurons, uses far less energy, and could be a critical step toward building artificial general intelligence (AGI).
The project, led by USC Professor Joshua Yang, introduces a new device called a diffusive memristor. Here’s the key difference: nearly all modern computers, from your phone to a supercomputer, are built on silicon technology that works by shuffling electrons. This is incredibly fast, but it’s also incredibly energy-hungry, a massive problem for running today’s giant AI models.
The human brain doesn’t use electrons in the same way. It’s “wetware.” It runs on a mix of electrical and chemical signals. Within a neuron, the signal is carried by ions such as potassium and sodium moving across the cell membrane; when it reaches the junction with the next cell (the synapse), it’s converted into chemical messengers that physically cross to the next neuron, passing on the information.
Professor Yang’s team successfully emulated this physical process. Instead of silicon, their device uses silver ions in an oxide. The silver physically migrates within the oxide to generate an electrical pulse, mimicking the ion dynamics of the brain. “Even though they’re not exactly the same ions… the physics governing the ion motion and the dynamics are very similar,” Yang says. “The brain learns by moving ions across membranes, achieving energy-efficient and adaptive learning directly in hardware.”
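To make the mechanism concrete, here is a minimal toy simulation of how a volatile, diffusion-driven device like this can behave as a leaky integrate-and-fire neuron. To be clear, this is not the team’s published device model: the growth/decay law, time constants, and threshold below are illustrative assumptions chosen only to show the qualitative idea.

```python
import numpy as np

# Toy model of a volatile "diffusive" memristor acting as a leaky
# integrate-and-fire neuron. These are NOT the paper's device
# equations: the growth/decay law, time constants, and threshold
# are illustrative assumptions.
DT = 1e-4           # simulation step, seconds
TAU_DRIVE = 2e-3    # how quickly bias drives silver migration (assumed)
TAU_DIFFUSE = 5e-3  # how quickly diffusion relaxes the filament (assumed)
THRESHOLD = 0.5     # filament state at which the device "fires" (assumed)

def simulate(voltage_trace):
    """Return the filament-state trajectory and spike times."""
    w = 0.0  # 0 = no conductive filament, 1 = fully formed
    states, spikes = [], []
    for i, v in enumerate(voltage_trace):
        drive = (v ** 2) / TAU_DRIVE * (1.0 - w)  # bias pushes silver together
        relax = w / TAU_DIFFUSE                   # diffusion pulls it apart
        w = min(max(w + DT * (drive - relax), 0.0), 1.0)
        if w >= THRESHOLD:                        # filament bridges: fire a pulse
            spikes.append(i * DT)
            w = 0.0                               # filament dissolves after firing
        states.append(w)
    return np.array(states), spikes

# A 200 Hz pulse train integrates toward threshold, while the gaps
# between pulses leak charge away: leaky integrate-and-fire behavior.
t = np.arange(0.0, 0.05, DT)
pulses = (np.sin(2 * np.pi * 200 * t) > 0) * 0.8  # 0.8 V half-duty pulses
_, spike_times = simulate(pulses)
print(f"fired {len(spike_times)} times in 50 ms")
```

The key qualitative feature is volatility: sustained input builds the conductive state up to a firing threshold, while sparse input simply leaks away, much as ions diffuse back across a biological membrane.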
Why ‘wetware’ is smarter than hardware
The advantage of this brain-like approach isn’t speed; electrons are still faster. The advantage is efficiency. The human brain can learn to recognize handwritten digits after seeing just a few examples, all while consuming only about 20 watts of power. A supercomputer needs thousands of examples and consumes megawatts of power to do the same task.
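A back-of-envelope calculation makes the gap vivid. Using the round figures above, and taking 1 megawatt as a deliberately conservative stand-in for “megawatts”:

```python
# Power gap implied by the figures above; 1 MW is a conservative
# stand-in for "megawatts of power".
brain_watts = 20
machine_watts = 1_000_000

print(f"~{machine_watts / brain_watts:,.0f}x more power than a brain")  # ~50,000x
```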
Yang explains that electrons are “lightweight and volatile,” which is great for software-based learning but terrible for efficiency. Ions, being heavier, create more persistent, hardware-level changes, which is how the brain actually learns. The new devices bring hardware one step closer to mimicking that natural intelligence.
A massive leap in efficiency
The payoffs for this new design are enormous. In conventional chip design, emulating a single neuron requires tens or even hundreds of transistors. The new diffusive memristor design requires the space of just one transistor. “We are designing the building blocks that will eventually lead us to reduce the chip size by orders of magnitude and reduce the energy consumption by orders of magnitude,” Yang explains.
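Taken literally, those figures already imply the per-neuron footprint savings before any energy savings are counted. A quick sanity check on the quoted numbers (illustrative only):

```python
# Footprint implied by the quoted figures: "tens or even hundreds"
# of transistors per conventional neuron vs. one transistor's worth
# of space for the diffusive memristor design.
conventional_transistors = (10, 100)
memristor_footprint = 1  # in transistor-equivalents

for n in conventional_transistors:
    print(f"{n} transistors/neuron -> ~{n // memristor_footprint}x smaller footprint")
```

Multiplied across the millions of neurons a full chip would need, that per-neuron factor is where the quoted “orders of magnitude” savings come from.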
There are still hurdles. The silver used in the experiment isn’t compatible with standard semiconductor manufacturing, so the team will need to investigate other materials. But the proof-of-concept is a breakthrough. With these new, compact building blocks—artificial synapses and neurons—the next step is to integrate millions of them onto a chip. “Even more exciting,” Yang concludes, “is the prospect that such brain-faithful systems could help us uncover new insights into how the brain itself works.”