Tesla detailed its next-generation AI5 chip during its third-quarter 2025 earnings call, describing a processor with up to 40 times the performance of its predecessor. The chip will be manufactured by both Samsung and TSMC at their U.S. facilities.
The AI5 chip, which was first shown at the company’s 2024 shareholder meeting, was described by CEO Elon Musk on the earnings call as “an amazing design.” He explained that the performance increase is a direct result of the company’s deep integration of hardware and software. “By some metrics, the AI5 chip will be 40x better than the AI4 chip,” Musk stated, attributing the improvement to the chip’s custom optimization for Tesla’s specific applications across its product lines.
The new architecture delivers significant upgrades over the AI4, including eight times the raw compute power, nine times the memory, and five times the memory bandwidth. Tesla engineers achieved the 40-fold performance gain by re-architecting operations that created bottlenecks in the previous design. A key example is the softmax operation, which required 40 emulation steps on the AI4 but will run natively in only a few steps on the AI5. The chip also adds native support for mixed-precision models and sparse tensor operations, both tailored to handling real-world artificial intelligence workloads more efficiently.
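To illustrate why native softmax support matters, the sketch below contrasts a softmax computed in a few fused steps with one whose exponential must be emulated term by term. The NumPy code, the Taylor-series emulation strategy, and the step counts are illustrative assumptions; Tesla has not published the AI4 or AI5 instruction sets, so this is not a description of either chip.

```python
# Illustrative sketch only: the emulation strategy and step counts below are
# assumptions used to show why a native softmax path matters, not a
# description of Tesla's AI4 or AI5 hardware.
import numpy as np


def softmax_native(x: np.ndarray) -> np.ndarray:
    """Softmax as a handful of fused steps: shift, exponentiate, normalize.
    On hardware with a native exponential path, this maps to a few operations."""
    shifted = x - x.max()      # step 1: subtract max for numerical stability
    exps = np.exp(shifted)     # step 2: native exponential
    return exps / exps.sum()   # step 3: normalize


def softmax_emulated(x: np.ndarray, terms: int = 12) -> np.ndarray:
    """Softmax on hypothetical hardware with no exponential unit: exp() is
    built from a truncated Taylor series, so each extra term costs another
    couple of element-wise passes and the whole operation expands into
    dozens of steps."""
    shifted = x - x.max()
    # exp(z) ~= sum over k of z**k / k!, accumulated one term per iteration
    result = np.ones_like(shifted)
    term = np.ones_like(shifted)
    for k in range(1, terms):
        term = term * shifted / k   # next series term
        result = result + term      # accumulate
    return result / result.sum()


if __name__ == "__main__":
    logits = np.array([2.0, 1.0, 0.1, -0.5])
    print(softmax_native(logits))
    print(softmax_emulated(logits))  # converges to nearly the same probabilities
```

Each extra series term in the emulated path is another element-wise pass over the data, which is the kind of per-operation overhead that native hardware support is meant to remove.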
Tesla’s manufacturing strategy for the AI5 involves a dual-foundry approach, a shift from earlier plans. The original roadmap had designated TSMC as the sole producer for AI5, with Samsung slated to manufacture the future AI6 chip. By engaging both Samsung and TSMC for AI5 production, Tesla aims to build supply chain resilience. “Our explicit goal is to have an oversupply of AI5 chips,” Musk said. He elaborated that any chips not installed in vehicles or the company’s Optimus robot will be repurposed to expand Tesla’s data center operations.
This internal chip development supplements, rather than replaces, the company’s existing hardware partnerships. Musk clarified that Tesla has no plans to drop Nvidia as a hardware provider for its data centers; instead, it will use its AI5 chips “in conjunction with” Nvidia’s systems to augment its computational capacity. Tesla’s data centers currently operate with computing power equivalent to 81,000 Nvidia H100 chips.
The AI5 design eliminates legacy on-chip components, such as the dedicated GPU block and the image signal processor. Musk explained that this streamlined design means the AI5 effectively becomes “a GPU” itself. He predicted the new architecture will deliver the industry’s “best performance per watt, maybe by a factor of two or three, and the best performance per dollar for AI, maybe by a factor of 10.”