Alpamayo 1: The industry’s first chain-of-thought reasoning VLA model designed for the AV research community, now on Hugging Face. With a 10-billion-parameter architecture, Alpamayo 1 uses video input to generate trajectories alongside reasoning traces, showing the logic behind each decision. Developers can adapt Alpamayo 1 into smaller runtime models for vehicle development, or use it as a foundation for AV development tools such as reasoning-based evaluators and auto-labeling systems. Alpamayo 1 provides open model weights and open-source inference scripts. Future models in the family will feature larger parameter counts, more detailed reasoning capabilities, more input and output flexibility, and options for commercial usage.
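The "trajectories alongside reasoning traces" output described above suggests a simple structured record that downstream tools (e.g. a reasoning-based evaluator) could consume. Here is a minimal Python sketch of what such a record and a toy evaluator might look like; all class, field, and function names are hypothetical illustrations, not Alpamayo 1's actual interface.

```python
from dataclasses import dataclass, field

@dataclass
class DrivingDecision:
    # Hypothetical: future waypoints in ego-vehicle coordinates, (x, y) in metres.
    trajectory: list[tuple[float, float]]
    # Hypothetical: natural-language chain-of-thought steps behind the trajectory.
    reasoning: list[str] = field(default_factory=list)

def evaluator_accepts(decision: DrivingDecision) -> bool:
    """Toy 'reasoning-based evaluator': only accept a decision for
    auto-labeling if it has both waypoints and at least one reasoning step."""
    return bool(decision.trajectory) and bool(decision.reasoning)

decision = DrivingDecision(
    trajectory=[(0.0, 0.0), (0.5, 4.8), (1.2, 9.5)],
    reasoning=[
        "Pedestrian waiting on the right kerb; keep lateral margin.",
        "Light is green; maintain current speed.",
    ],
)
print(evaluator_accepts(decision))  # True
```

A real evaluator would of course score the reasoning content itself, but even this shape shows why traces matter: they give auto-labeling pipelines something to audit besides the raw trajectory.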
AlpaSim: A fully open-source, end-to-end simulation framework for high-fidelity AV development, available on GitHub. It provides realistic sensor modeling, configurable traffic dynamics, and scalable closed-loop testing environments, enabling rapid validation and policy refinement.
Physical AI Open Datasets: NVIDIA offers the most diverse large-scale open dataset for AVs, containing 1,700+ hours of driving data collected across the widest range of geographies and conditions, covering rare and complex real-world edge cases essential for advancing reasoning architectures. These datasets are available on Hugging Face.
1,700+ hours of driving data collected across the widest range of geographies and conditions
I thought you'd missed off a few zeros or an "m" at the end of the number there, so I checked...
Nope
Official press release:
TLDR: Open-weight VLAs.
Oh cool. Humans aren't good at driving.