Could machine learning’s escalating costs and carbon footprint be reduced by swapping digital processors for analog AI hardware, tapping into fast, low-power physical computation?
Researchers Logan Wright and Tatsuhiro Onodera of Cornell University and NTT Research foresee a time when machine learning (ML) will run on novel physical hardware, such as photonic or nanomechanical systems. They argue these devices could be deployed in both server and edge contexts.
Deep neural networks, the foundation of current AI initiatives, depend heavily on digital processors such as GPUs. But worries about machine learning’s financial and environmental costs have been mounting for some time, and those costs increasingly constrain the scalability of deep learning models.
A 2019 study from the University of Massachusetts, Amherst performed a life cycle assessment of training several popular large AI models. It found that the process can emit more than 626,000 pounds of carbon dioxide equivalent, nearly five times the lifetime emissions of the average American car, including the emissions from manufacturing the car itself.
During a session at VentureBeat Transform’s Executive Summit on July 19, NTT Research CEO Kazu Gomi said that machine learning doesn’t have to rely on digital circuits; it can instead run on a physical neural network, in which real analog hardware, rather than software, embodies the artificial neurons.
“One of the obvious benefits of using analog systems rather than digital is AI’s energy consumption. The consumption issue is real, so the question is what are new ways to make machine learning faster and more energy-efficient,” Gomi said.
Analog AI aims to be much more efficient at performing calculations
Wright noted that in the early history of AI, people weren’t trying to figure out how to build digital computers.
“They were trying to think about how we could emulate the brain, which of course is not digital. What I have in my head is an analog system, and it’s actually much more efficient at performing the types of calculations that go on in deep neural networks than today’s digital logic circuits,” he explained.
One example of analog hardware for artificial intelligence is the brain, but there are also systems that employ optics.
“My favorite example is waves, because a lot of things like optics are based on waves. In a bathtub, for instance, you could formulate the problem to encode a set of numbers. At the front of the bathtub, you can set up a wave and the height of the wave gives you this vector X. You let the system evolve for some time, and the wave propagates to the other end of the bathtub. After some time you can then measure the height of that, and that gives you another set of numbers,” he stated.
In essence, nature is capable of computation. Additionally, “you don’t need to plug it into anything,” he said.
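Wright’s bathtub picture can be made concrete with a toy simulation. The sketch below is our illustration, not NTT’s system; the grid size, wave speed, and step count are arbitrary. It encodes a vector as initial wave heights, evolves them with a discretized wave equation, and reads the heights back out. Because the evolution is linear, the medium effectively applies a fixed matrix to the input without any processor involved.

```python
import numpy as np

# Toy 1-D "wave computer": encode numbers as initial wave heights, let a
# discretized wave equation evolve them, then measure the heights again.
# The net effect of the evolution is a fixed linear map applied to the input.
N, steps, c = 32, 50, 0.5          # grid points, time steps, wave speed

def propagate(x):
    u_prev = x.copy()              # zero initial velocity
    u = x.copy()
    for _ in range(steps):
        lap = np.roll(u, 1) - 2 * u + np.roll(u, -1)   # discrete Laplacian
        u_next = 2 * u - u_prev + (c ** 2) * lap       # leapfrog time step
        u_prev, u = u, u_next
    return u                       # "measured" wave heights at readout time

x = np.zeros(N)
x[4:8] = [0.2, -1.0, 0.5, 0.8]     # the vector X, encoded into the wave
y = propagate(x)                   # the physics performs the computation
```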
Analog AI ventures
Researchers are using diverse strategies to create analog AI technology. For instance, IBM Research has invested in analog circuits, particularly memristor technology, for machine learning computations.
“It’s quite promising. These memristor circuits have the property of having information be naturally computed by nature as the electrons ‘flow’ through the circuit, allowing them to have potentially much lower energy consumption than digital electronics,” Onodera explained.
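Onodera’s description maps onto simple circuit physics: in an idealized memristor crossbar, programmed conductances store the weight matrix, input voltages encode the activations, and Kirchhoff’s current law sums the products along each output line, so a matrix-vector multiply happens in one analog step. A minimal sketch, assuming ideal devices apart from a token read-noise term:

```python
import numpy as np

# Idealized memristor crossbar: conductances G hold the weights, voltages V
# encode the input, and the output currents I = G @ V emerge directly from
# Ohm's and Kirchhoff's laws, i.e., computed "as the electrons flow".
rng = np.random.default_rng(1)

G = rng.uniform(0.0, 1.0, size=(3, 4))    # programmed conductances (siemens)
V = np.array([0.1, 0.4, -0.2, 0.3])       # input voltages

I = G @ V                                 # one analog step of computation
I_noisy = I + rng.normal(scale=0.01, size=I.shape)   # real devices add noise
```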
NTT Research, on the other hand, is concentrating on a broader framework that isn’t limited to memristor technology.
“Our work is focused on also enabling other physical systems, for instance, those based on light and mechanics (sound), to perform machine learning. By doing so, we can make smart sensors in the native physical domain where the information is generated, such as in the case of a smart microphone or a smart camera,” he stated.
Startups such as Mythic, meanwhile, concentrate on analog AI built from electronic circuits, which Wright calls a “great step” and “probably the lowest-risk way to get into analog neural networks.” Still, he noted, “there is only so much improvement in performance that is possible if the hardware is still based on electronics.”
What will be the future of analog AI?
Several companies, including LightMatter, Lightelligence, and Luminous Computing, are instead working on photonic analog AI, computing with light rather than electronics. Wright argued that this technology is less mature and riskier.
“But the long-term potential is much more exciting. Light-based neural networks could be much more energy-efficient,” he said.
Beyond light, he said, still other materials could be used to build computers, particularly for artificial intelligence.
“You could make it out of biological materials, electrochemistry (like our own brains), or out of fluids, acoustic waves (sound), or mechanical objects, modernizing the earliest mechanical computers,” he added.
MIT researchers revealed last week that they had developed new protonic programmable resistors: by repeating arrays of these resistors in intricate layers, they built a network of analog artificial neurons and synapses that can perform calculations much like a digital neural network. The result is a milestone for the future of analog AI.
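The layered structure generalizes the single-crossbar picture above: stack analog matrix-vector multiplies with a nonlinearity between them and the result behaves like a deep network. A hedged toy version follows; the layer sizes and the tanh nonlinearity are our choices for illustration, not details of the MIT design.

```python
import numpy as np

# Hypothetical stack of resistor-array layers: each layer is an analog
# matrix-vector multiply (conductances @ inputs) followed by a nonlinearity.
rng = np.random.default_rng(3)
layers = [rng.uniform(0.0, 1.0, size=(16, 16)) for _ in range(3)]

def analog_network(x):
    for G in layers:
        x = np.tanh(G @ x)    # analog MVM, then a nonlinear stage
    return x

out = analog_network(rng.normal(size=16))
```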
NTT Research, by its own account, is moving beyond these strategies to pose far more expansive, long-term questions.
“Our paper provides the first answer to these questions by telling us how we can make a neural network computer using any physical substrate. And so far, our calculations suggest that making these weird computers will one day soon actually make a lot of sense, since they can be much more efficient than digital electronics, and even analog electronics. Light-based neural network computers seem like the best approach so far, but even that question isn’t completely answered,” Wright said.
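How such a substrate-agnostic computer gets trained is the crux. One pattern used in this line of research, sketched below as our assumption about the general approach rather than a transcription of the paper’s algorithm, is to run the forward pass on the physical system and compute gradients through a differentiable digital model of it:

```python
import numpy as np

# Conceptual sketch (our assumption, not the paper's verbatim method):
# the hardware does the forward pass; a digital model supplies gradients.

def physical_forward(x, theta):
    # stand-in for real hardware: fixed nonlinear physics plus readout noise
    return np.tanh(theta * x) + np.random.normal(scale=0.01, size=x.shape)

def grad_theta(x, theta, err):
    # backprop through the digital model tanh(theta * x):
    # d/dtheta tanh(theta * x) = x * (1 - tanh(theta * x) ** 2)
    return np.sum(err * x * (1.0 - np.tanh(theta * x) ** 2))

theta, lr = 0.5, 0.1
x = np.linspace(-1.0, 1.0, 8)
target = 0.5 * x                           # toy regression target
for _ in range(100):
    y = physical_forward(x, theta)         # hardware computes the forward pass
    err = y - target                       # dL/dy for L = 0.5 * ||y - target||^2
    theta -= lr * grad_theta(x, theta, err)
```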
Analog AI is not the only choice though
The AI business is “in this really interesting hardware stage,” according to Sara Hooker, a former Google Brain researcher who now leads the nonprofit research lab Cohere for AI.
She emphasized that the enormous advance in AI a decade ago was, at its root, a hardware development.
“Deep neural networks did not work until GPUs, which were used for video games [and] were just repurposed for deep neural networks,” she explained.
“The change was almost instantaneous. Overnight, what took 13,000 CPUs took two GPUs. That was how dramatic it was,” she said.
She asserted that there are probably alternative ways of representing the world that could be as effective as digital ones.
“If even one of these data directions starts to show progress, it can unlock many efficiencies and different ways of learning representations. That’s what makes it worthwhile for labs to back them,” she explained.
The success of GPUs for deep neural networks was “actually a bizarre, lucky coincidence – it was winning the lottery,” according to Hooker, whose 2020 article “The Hardware Lottery” examined why different hardware tools have succeeded and failed.
GPUs, she said, were created for video games and were never intended for machine learning. Deep learning’s breakthrough “depended upon the right moment of alignment between progress on the hardware side and progress on the modeling side. Making more hardware options available is the most important ingredient because it allows for more unexpected moments where you see those breakthroughs,” she said.
Analog AI isn’t the only solution under consideration for lowering the costs and carbon emissions of artificial intelligence, however. New tools are emerging to reduce AI’s carbon footprint: researchers are banking on field-programmable gate arrays (FPGAs), for example, as application-specific accelerators in data centers that can lower energy consumption and boost operational speed. There are also efforts, Hooker said, to improve the software stack.
But risks must be taken, according to Hooker. Asked whether she believed the large tech giants were supporting analog and other alternative, non-digital visions of AI’s future, she said, “One hundred percent. There is a clear motivation.”
“It’s always been tricky when investment rests solely on companies because it’s so risky,” she said. “It often has to be part of a nationalist strategy for it to be a compelling long-term bet.” She added that consistent government investment in a long-term hardware ecosystem is needed.
Although Hooker said she wouldn’t bet on the broad adoption of analog AI hardware, she maintains that the research efforts are good for the ecosystem.
“It’s kind of like the initial NASA flight to the moon. There are so many scientific breakthroughs that happen just by having an objective. There’s an understanding among people in the field that there has to be some bet on riskier projects,” she said.
Near-term applications of analog AI
The NTT researchers were explicit that the earliest practical implementations of their analog AI work are at least five to 10 years away, and even then will likely be used first for niche applications, such as at the edge.
“I think the most near-term applications will happen on edge, where there are fewer resources, where you might not have as much power. I think that’s where there is the most potential. The team is thinking about which types of physical systems will be the most scalable and offer the biggest advantage in terms of energy efficiency and speed. But in terms of entering the deep learning infrastructure, it will likely happen incrementally,” Wright said.
“I think it would just slowly come into the market, with a multilayered network with maybe the front end happening on the analog domain,” he said. “I think that’s a much more sustainable approach,” he added.
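Wright’s incremental path, an analog front end feeding digital layers, can be pictured with a short hybrid sketch. Everything here is hypothetical: the sizes, the noise level, and the split between analog and digital stages are our assumptions, not a description of any planned product.

```python
import numpy as np

# Hypothetical hybrid pipeline: a noisy analog front end (e.g., optics or
# circuits) performs the first transform; a digital back end finishes the job.
rng = np.random.default_rng(2)

W_analog = rng.normal(size=(16, 64))      # fixed transform realized in hardware
W_digital = rng.normal(size=(4, 16))      # ordinary digital weights

def analog_front_end(x):
    y = W_analog @ x                      # "computed" by the device itself
    return y + rng.normal(scale=0.05, size=y.shape)   # device noise

def model(x):
    h = np.maximum(analog_front_end(x), 0.0)   # nonlinearity on analog output
    return W_digital @ h                        # digital back end

out = model(rng.normal(size=64))
```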