NVIDIA announces Alpamayo family of open-source AI models and tools

With Alpamayo, mobility leaders such as JLR, Lucid and Uber, along with the AV research community, can fast-track safe, reasoning-based level 4 deployment roadmaps.

NVIDIA unveiled the NVIDIA Alpamayo family of open AI models, simulation tools and datasets designed to accelerate the next era of safe, reasoning‑based autonomous vehicle (AV) development.

AVs must safely operate across an enormous range of driving conditions. Rare, complex scenarios, often called the “long tail,” remain some of the toughest challenges for autonomous systems to safely master. Traditional AV architectures separate perception and planning, which can limit scalability when new or unusual situations arise. Recent advances in end-to-end learning have made significant progress, but overcoming these long-tail edge cases requires models that can safely reason about cause and effect, especially when situations fall outside a model’s training experience.

The Alpamayo family introduces chain-of-thought, reasoning-based vision language action (VLA) models that bring humanlike thinking to AV decision-making. These systems can think through novel or rare scenarios step by step, improving both driving capability and explainability, which is critical to scaling trust and safety in intelligent vehicles. The models are underpinned by the NVIDIA Halos safety system.
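To make the idea concrete, a reasoning VLA's output pairs a planned trajectory with the chain-of-thought trace that explains it. The structure below is a minimal illustrative sketch, not an actual Alpamayo API; the class name, fields and example values are all assumptions.

```python
from dataclasses import dataclass
from typing import List, Tuple

# Hypothetical sketch of what a reasoning VLA's output might contain:
# a driving trajectory plus the step-by-step rationale behind it.
# This structure is illustrative only, not an Alpamayo interface.

@dataclass
class DrivingDecision:
    trajectory: List[Tuple[float, float]]  # planned (x, y) waypoints
    reasoning_trace: List[str]             # chain-of-thought explanation

decision = DrivingDecision(
    trajectory=[(0.0, 0.0), (1.2, 0.1), (2.4, 0.3)],
    reasoning_trace=[
        "A pedestrian is waiting near the crosswalk ahead.",
        "Slow down and bias the planned path away from the curb.",
    ],
)
```

Exposing the reasoning trace alongside the trajectory is what makes each decision auditable, which is the explainability property the models are designed to provide.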

A complete, open ecosystem for reasoning‑based autonomy

Alpamayo integrates three foundational pillars — open models, simulation frameworks and datasets — into a cohesive, open ecosystem that any automotive developer or research team can build upon.

Rather than running directly in-vehicle, Alpamayo models serve as large-scale teacher models that developers can fine-tune and distill into the backbones of their complete AV stacks.
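The teacher-student pattern described above can be sketched as standard knowledge distillation. The snippet below is a minimal sketch under assumed names: `teacher` stands in for a large Alpamayo-style model, `student` for a compact in-vehicle planner, and the input/output shapes and loss choice are illustrative, not part of any Alpamayo release.

```python
import torch
import torch.nn.functional as F

# Hypothetical sketch of distilling a large teacher planner into a small
# student planner. Model names, shapes and the loss are assumptions.

def distillation_step(teacher, student, frames, optimizer):
    """One training step: match the student's predicted trajectory
    to the teacher's prediction on the same camera frames."""
    with torch.no_grad():
        teacher_traj = teacher(frames)   # e.g. (batch, horizon, 2) waypoints

    student_traj = student(frames)

    # Pure trajectory-matching distillation loss; a real pipeline would
    # likely blend in ground-truth imitation and auxiliary terms.
    loss = F.smooth_l1_loss(student_traj, teacher_traj)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```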

At CES, NVIDIA is releasing:

  • Alpamayo 1: The industry’s first chain-of-thought reasoning VLA model designed for the AV research community, now on Hugging Face. With a 10-billion-parameter architecture, Alpamayo 1 uses video input to generate trajectories alongside reasoning traces, showing the logic behind each decision. Developers can adapt Alpamayo 1 into smaller runtime models for vehicle development, or use it as a foundation for AV development tools such as reasoning-based evaluators and auto-labeling systems. Alpamayo 1 provides open model weights and open-source inference scripts (see the download sketch after this list). Future models in the family will feature larger parameter counts, more detailed reasoning capabilities, more input and output flexibility, and options for commercial usage.
  • AlpaSim: A fully open‑source, end-to-end simulation framework for high‑fidelity AV development, available on GitHub. It provides realistic sensor modeling, configurable traffic dynamics and scalable closed‑loop testing environments, enabling rapid validation and policy refinement.
  • Physical AI Open Datasets: NVIDIA offers the most diverse large-scale open AV dataset, containing 1,700+ hours of driving data collected across a wide range of geographies and conditions and covering the rare and complex real-world edge cases essential for advancing reasoning architectures. These datasets are available on Hugging Face (see the loading sketch after this list).
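Because the Alpamayo 1 weights and inference scripts are published on Hugging Face, fetching them might look like the sketch below. The repository ID is a placeholder, not a confirmed identifier; the actual entry points are defined by the scripts shipped with the release.

```python
# Hypothetical sketch: pull the Alpamayo 1 checkpoint and its inference
# scripts from the Hugging Face Hub. The repo ID below is a placeholder;
# use the identifier from the official model card instead.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="nvidia/alpamayo-1")  # placeholder ID

# The release ships open inference scripts alongside the weights, so the
# typical workflow is to run those scripts against the downloaded snapshot
# rather than a generic pipeline.
print(f"Model snapshot downloaded to: {local_dir}")
```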
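Similarly, since the Physical AI Open Datasets are hosted on Hugging Face, a first look could use the `datasets` library as sketched below. The dataset ID, split and streaming mode are placeholders and assumptions; consult the dataset card for the real identifiers and schema.

```python
# Hypothetical sketch: stream a few samples of the driving data from
# Hugging Face. The dataset ID and split below are placeholders.
from datasets import load_dataset

ds = load_dataset("nvidia/physical-ai-av-dataset",  # placeholder ID
                  split="train", streaming=True)

for i, sample in enumerate(ds):
    print(sample.keys())  # inspect the per-clip schema (sensors, labels, ...)
    if i >= 2:
        break
```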

Together, these tools enable a self-reinforcing development loop for reasoning-based AV stacks.

Broad AV industry support for Alpamayo

Mobility leaders and industry experts, including Lucid, JLR, Uber and Berkeley DeepDrive, are exploring Alpamayo to develop reasoning-based AV stacks that enable level 4 autonomy.

Beyond Alpamayo, developers can tap into NVIDIA’s rich library of tools and models, including those from the NVIDIA Cosmos and NVIDIA Omniverse platforms. Developers can fine-tune the model releases on proprietary fleet data, integrate them into the NVIDIA DRIVE Hyperion architecture built with NVIDIA DRIVE AGX Thor accelerated compute, and validate performance in simulation before commercial deployment.

For more information, visit nvidia.com.