Nvidia has started shipping the DGX Spark, its smallest desktop AI supercomputer. The system integrates the company’s Grace Blackwell architecture to support advanced artificial intelligence workloads locally and is being released through major hardware partners.
The DGX Spark combines GPU, CPU, networking, and Nvidia's AI software stack in a single unit. Nvidia states the system delivers up to one petaflop of AI performance and includes 128 GB of unified memory, and it ships with software preinstalled for both AI model training and inference. Orders for the base unit, priced at $4,000, open on October 15, 2025, through Nvidia's website; partner systems from Acer, ASUS, Dell Technologies, GIGABYTE, HP, Lenovo, and MSI are available globally.
Despite its headline performance figure, the DGX Spark's 273 GB/s memory bandwidth limits its throughput for production-level inference, positioning it more for prototyping and experimental work. Benchmarks indicate its performance is roughly one-quarter that of the RTX Pro 6000 Blackwell workstation GPU, and the same bandwidth constraint leaves it trailing the RTX 5090 when running large models.
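To see why bandwidth, not compute, is the binding constraint, a back-of-envelope estimate helps. The sketch below is an assumption-laden simplification (not a figure from the article): it treats autoregressive decoding as memory-bandwidth-bound, with every model weight read once per generated token, so tokens per second is capped at bandwidth divided by model size.

```python
# Back-of-envelope ceiling on decode throughput for a bandwidth-bound system.
# Assumption: each generated token requires streaming all model weights from
# memory once, so tokens/s <= bandwidth / model footprint.

def decode_tokens_per_sec(bandwidth_gb_s: float,
                          params_billions: float,
                          bytes_per_param: float) -> float:
    """Upper bound on tokens/sec for memory-bandwidth-bound decoding."""
    model_gb = params_billions * bytes_per_param  # weight footprint in GB
    return bandwidth_gb_s / model_gb

# DGX Spark at 273 GB/s running a hypothetical 70B-parameter model
# quantized to 4 bits per weight (0.5 bytes/param -> 35 GB of weights):
print(round(decode_tokens_per_sec(273, 70, 0.5), 1))  # -> 7.8 tokens/s ceiling
```

Real throughput lands below this ceiling once KV-cache reads and scheduling overhead are included, which is consistent with the article's framing of the system as a prototyping machine rather than a production inference server.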
The unit's compact design maintains stable thermals under load. It draws around 170 W from an external USB-C power source, a configuration that can complicate office deployments. Total cost of ownership also becomes harder to pin down at scale: connecting two DGX Spark units to run a 405-billion-parameter model requires additional ConnectX-7 200 GbE hardware, which is not included in the base price and muddies cost comparisons with public-cloud GPU options.
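Simple memory arithmetic shows why a 405-billion-parameter model forces the two-unit configuration. The sketch below is a weights-only estimate under assumptions not stated in the article (4-bit quantization, no KV cache or activation overhead counted), so it is a lower bound on the real footprint.

```python
# Rough memory-fit check: do a model's weights fit in N DGX Spark units?
# Assumption: weights only (ignores KV cache and activations), 128 GB
# of unified memory per unit as stated by Nvidia.

def fits(params_billions: float, bytes_per_param: float,
         units: int, mem_per_unit_gb: float = 128.0) -> bool:
    weights_gb = params_billions * bytes_per_param
    return weights_gb <= units * mem_per_unit_gb

# A 405B model at 4 bits/weight needs ~202.5 GB of weights alone:
print(fits(405, 0.5, units=1))  # -> False (202.5 GB > 128 GB)
print(fits(405, 0.5, units=2))  # -> True  (202.5 GB <= 256 GB)
```

Since even quantized weights exceed one unit's 128 GB, the ConnectX-7 interconnect is not optional for that workload, which is why it belongs in any honest cost comparison.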
The DGX Spark is better suited to specific, controlled environments. The NYU Global Frontier Lab noted its applicability to privacy-sensitive work in healthcare, which opens a basis for managed services covering procurement, HIPAA-compliant rollouts, and ongoing security. The system's ability to support fine-tuning of models up to 70 billion parameters appeals to educational institutions and smaller biotech firms that want local customization without exposing data to the cloud, creating a niche for turnkey AI lab setups.
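The 70-billion-parameter fine-tuning claim is plausible only with parameter-efficient methods. The sketch below uses common rule-of-thumb figures that are assumptions, not article data: full fine-tuning with Adam needs roughly 12 bytes per parameter (FP16 weights and gradients plus FP32 optimizer state), while a QLoRA-style approach keeps a 4-bit frozen base and trains only small adapters.

```python
# Hedged estimate of fine-tuning memory needs for a 70B model, illustrating
# why it fits in 128 GB only with parameter-efficient techniques.
# Per-parameter byte counts below are conventional rules of thumb.

def full_finetune_gb(params_billions: float) -> float:
    # FP16 weights (2 B) + FP16 gradients (2 B) + FP32 Adam moments (8 B)
    return params_billions * (2 + 2 + 8)

def qlora_gb(params_billions: float, adapter_gb: float = 2.0) -> float:
    # 4-bit frozen base weights (0.5 B/param) + small trainable adapters
    return params_billions * 0.5 + adapter_gb

print(full_finetune_gb(70))  # -> 840.0 GB, far beyond 128 GB
print(qlora_gb(70))          # -> 37.0 GB, fits in unified memory
```

The contrast, about 840 GB versus roughly 37 GB, explains why the local-customization pitch to universities and biotech firms hinges on adapter-based fine-tuning rather than full-parameter training.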
Nvidia's extensive partner network, from Dell and HP to Lenovo and ASUS, provides a broad channel for market distribution and lets integrators bundle services such as installation, training, and support for organizations without in-house AI expertise. In a separate recent development, Nvidia's CEO highlighted the company's first direct partnership with OpenAI.