Nvidia announced new AI models and infrastructure on Monday at the NeurIPS AI conference in San Diego, California, to advance physical AI for robots and autonomous vehicles that perceive and interact with the real world. The initiative centers on open-source tools that serve as backbone technology for autonomous driving research.
The semiconductor company introduced Alpamayo-R1, an open reasoning vision-language model tailored for autonomous driving research. Nvidia positions it as the first vision-language-action model built specifically for autonomous driving. Vision-language models process text and images jointly, enabling a vehicle to “see” its surroundings and generate driving decisions from those perceptual inputs.
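To make the idea concrete, here is a minimal conceptual sketch of the interface such a model exposes: a camera frame plus a text instruction go in, and a driving action with a reasoning trace comes out. This is not Nvidia’s API; the class, function, and trivial brightness rule below are illustrative stand-ins for a real neural network.

```python
# Conceptual sketch only (not Nvidia's API): the input/output shape of a
# vision-language-action model. A real model runs a neural network; a
# trivial rule stands in here so the data flow is visible.
from dataclasses import dataclass

@dataclass
class Action:
    steer: float      # steering angle in radians, negative = left
    throttle: float   # 0.0 (stopped) to 1.0 (full)
    rationale: str    # reasoning trace, as produced by a "reasoning" VLM

def drive_step(camera_frame: list[list[int]], instruction: str) -> Action:
    """Map a grayscale camera frame plus a text instruction to an action."""
    # Stand-in perception: average pixel brightness of the frame.
    brightness = sum(sum(row) for row in camera_frame) / (
        len(camera_frame) * len(camera_frame[0])
    )
    if "stop" in instruction.lower() or brightness < 32:
        return Action(0.0, 0.0, "Obstacle or stop request: hold position.")
    return Action(0.0, 0.3, "Road appears clear: proceed slowly.")

frame = [[128] * 4 for _ in range(4)]  # dummy 4x4 grayscale image
print(drive_step(frame, "continue along the lane").rationale)
# → Road appears clear: proceed slowly.
```

The key point the sketch captures is that language and vision share one decision path, and the model emits its reasoning alongside the control output rather than a bare steering command.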
Alpamayo-R1 builds directly on Nvidia’s Cosmos-Reason, a reasoning model that works through decisions before generating responses, an approach that allows more structured decision-making in complex scenarios. Nvidia first released the broader Cosmos model family in January 2025, establishing a series of AI tools designed for advanced reasoning tasks, and expanded the lineup in August 2025 with additional models for physical AI domains.
Video: Nvidia
According to an Nvidia blog post, technology such as Alpamayo-R1 plays a critical role for companies pursuing Level 4 autonomous driving. Level 4 denotes full autonomy within a designated operational area and under specific conditions; inside those parameters, vehicles operate without human intervention. The reasoning capabilities embedded in the model aim to give autonomous vehicles the “common sense” needed to handle nuanced driving decisions in ways comparable to human drivers.
Developers can access Alpamayo-R1 immediately on GitHub and Hugging Face, opening it to experimentation in autonomous driving projects. In parallel, Nvidia released the Cosmos Cookbook on GitHub, a collection of step-by-step guides, inference resources, and post-training workflows that help developers train and deploy Cosmos models for targeted use cases, with coverage of data curation, synthetic data generation, and model evaluation.