Arm announced on Monday that its Neoverse CPUs will integrate with Nvidia’s AI chips through NVLink Fusion technology, enabling hyperscalers to pair Arm-based processors with Nvidia graphics processing units in custom infrastructure setups.
The integration makes it easier for customers that prefer tailored infrastructure, particularly hyperscalers, to pair Arm-based Neoverse CPUs directly with Nvidia’s dominant GPUs. Hyperscalers, the largest cloud operators, often design custom systems to optimize performance and cost in the data centers that run AI workloads.
Nvidia has struck partnerships across the technology sector from its pivotal position in the AI industry. The announcement signals that Nvidia is opening its NVLink platform to a variety of custom chips rather than requiring customers to adopt its own CPUs. Nvidia currently sells Grace Blackwell, an AI system that links multiple GPUs with an Nvidia-branded, Arm-based CPU; it also offers server configurations that pair its GPUs with CPUs from Intel or Advanced Micro Devices, giving customers a range of hardware combinations for AI environments.
Microsoft, Amazon, and Google all develop or deploy Arm-based CPUs in their cloud platforms, giving them more control over system configurations and lowering costs by matching data center hardware to specific workloads.
Arm does not manufacture CPUs itself. It licenses the instruction set technology needed to build compatible chips and supplies designs that help partners bring Arm-based processors to market faster. Under Monday’s announcement, custom Neoverse chips will incorporate a new protocol for moving data between CPUs and GPUs, enabling efficient communication in high-performance computing tasks.
In traditional servers, the CPU was the primary component. Generative AI infrastructure instead centers on AI accelerator chips, predominantly Nvidia GPUs, with as many as eight GPUs often paired with a single CPU, a structure that prioritizes accelerator performance for processing intensive AI models and data.
In September, Nvidia committed $5 billion to Intel, the leading CPU manufacturer. A core element of that investment enables Intel CPUs to connect to Nvidia’s NVLink technology in AI servers, broadening compatibility options.
Nvidia agreed to acquire Arm for $40 billion in 2020, but regulators in the U.S. and U.K. blocked the deal in 2022. As of February, Nvidia retained a small stake in Arm, which is majority-owned by SoftBank; earlier this month, SoftBank sold its entire Nvidia stake. SoftBank also backs OpenAI’s Stargate project, which is expected to use Arm technology alongside chips from Nvidia and AMD.