NVIDIA-NeMo / NeMo
A scalable generative AI framework built for researchers and developers working on Large Language Models, Multimodal, and Speech AI (Automatic Speech Recognition and Text-to-Speech)
AI Architecture Analysis
This repository is indexed by RepoMind. By analyzing NVIDIA-NeMo/NeMo in our AI interface, you can instantly generate complete architecture diagrams, visualize control flows, and perform automated security audits across the entire codebase.
Our Agentic Context Augmented Generation (Agentic CAG) engine loads full source files into context on-demand, avoiding the fragmentation of traditional RAG systems. Ask questions about the architecture, dependencies, or specific features to see it in action.
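The "full files, on demand" idea is simple enough to sketch. The snippet below is a minimal illustration of the concept, not RepoMind's actual engine; the function name, context budget, and file-labeling scheme are all hypothetical.

```python
# Minimal sketch (not RepoMind's implementation) of agentic context
# augmentation: an agent selects whole files to load into the model
# context on demand, instead of retrieving pre-chunked snippets as in
# traditional RAG. All names below are hypothetical.
from pathlib import Path

MAX_CONTEXT_CHARS = 200_000  # hypothetical budget for the context window

def load_files_on_demand(repo_root: str, requested_paths: list[str]) -> str:
    """Concatenate complete source files into one context string,
    stopping before the (hypothetical) context budget is exceeded."""
    context_parts: list[str] = []
    used = 0
    for rel_path in requested_paths:
        path = Path(repo_root) / rel_path
        if not path.is_file():
            continue  # the agent may request paths that no longer exist
        text = path.read_text(encoding="utf-8", errors="replace")
        if used + len(text) > MAX_CONTEXT_CHARS:
            break  # keep files whole: never split a file into fragments
        # Label each file so the model can attribute code to its source.
        context_parts.append(f"### FILE: {rel_path}\n{text}")
        used += len(text)
    return "\n\n".join(context_parts)

# Usage: the agent decides which files matter for a question, then the
# full, unfragmented files are placed into the prompt.
# context = load_files_on_demand("/repos/NeMo", ["setup.py", "nemo/package_info.py"])
```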
Repository Overview (README excerpt)
**NVIDIA NeMo Speech Collection**

Latest News

- (2025-12-15) NVIDIA-Nemotron-3-Nano-30B-A3B is out, with a fully reproducible script and recipes! Check out NeMo Megatron-Bridge, NeMo AutoModel, NeMo-RL, and the NGC container to try them.
- ⚠️ Pivot notice: This repo has pivoted to focus on audio, speech, and multimodal LLMs only. Please refer to the NeMo Framework GitHub org for the complete list of repos under NeMo Framework.
- (2025-10-10) NeMo 2.0, with its support for Megatron Core, LLMs, and VLMs, was deprecated in 25.11 and replaced by NeMo Megatron-Bridge and NeMo AutoModel. More details can be found in the NeMo Framework GitHub org readme. The following collections are no longer available: avlm · diffusion · llm · multimodal · multimodal-autoregressive · nlp · speechlm · vision · vlm
- (2025-05-19) Pretrain and finetune 🤗 Hugging Face models via AutoModel. NeMo Framework's latest feature, AutoModel, enables broad support for Hugging Face models, with 25.04 focusing on AutoModelForCausalLM (Text Generation) and AutoModelForImageTextToText (Image-Text-to-Text). More details in the blog: Run Hugging Face Models Instantly with Day-0 Support from NVIDIA NeMo Framework. Future releases will enable support for more model families, such as Video Generation models. (A hedged usage sketch follows this list.)
- (2025-05-19) Training on Blackwell using NeMo. NeMo Framework has added Blackwell support, with performance benchmarks on GB200 & B200. More optimizations to come in upcoming releases.
- (2025-05-19) Training Performance on GPU Tuning Guide. NeMo Framework has published a comprehensive guide for performance tuning to achieve optimal throughput.
- (2025-05-19) New Models Support. NeMo Framework has added support for the latest community models: Llama 4, Flux, Llama Nemotron, Hyena & Evo2, Qwen2-VL, Qwen2.5, Gemma3, and Qwen3-30B & 32B.
- NeMo Framework 2.0. We've released NeMo 2.0, an update to the NeMo Framework that prioritizes modularity and ease of use. Please refer to the NeMo Framework User Guide to get started.
- (2025-01-09) New Cosmos World Foundation Models Support: Advancing Physical AI with NVIDIA Cosmos World Foundation Model Platform. The end-to-end NVIDIA Cosmos platform accelerates world model development for physical AI systems. Built on CUDA, Cosmos combines state-of-the-art world foundation models, video tokenizers, and AI-accelerated data processing pipelines. Developers can accelerate world model development by fine-tuning Cosmos world foundation models or building new ones from the ground up. These models create realistic synthetic videos of environments and interactions, providing a scalable foundation for training complex systems, from simulating humanoid robots performing advanced actions to developing end-to-end autonomous driving models.
- (2025-01-07) Accelerate Custom Video Foundation Model Pipelines with New NVIDIA NeMo Framework Capabilities. The NeMo Framework now supports training and customizing the NVIDIA Cosmos collection of world foundation models. Cosmos leverages advanced text-to-world generation techniques to create fluid, coherent video content from natural language prompts. You can also accelerate your video processing step using the NeMo Curator library, which provides optimized video processing and captioning features that can deliver up to 89x faster video processing compared to an unoptimized CPU pipeline.
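The AutoModel sketch referenced above: the feature builds on the Hugging Face transformers AutoModel classes named in the news item. NeMo's own wrappers and recipes differ (see the NeMo Framework User Guide for the actual entry points); the model ID below is just an example checkpoint.

```python
# Hedged sketch using the Hugging Face transformers API that NeMo's
# AutoModel feature targets; NeMo's own entry points are not shown here.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-3.1-8B"  # example: any causal-LM checkpoint

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Generate a short completion to confirm the checkpoint loads end to end.
inputs = tokenizer("NVIDIA NeMo is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```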
Large Language Models and Multimodal Models

- (2024-11-06) State-of-the-Art Multimodal Generative AI Model Development with NVIDIA NeMo. NVIDIA recently announced significant enhancements to the NeMo platform, focusing on multimodal generative AI models. The update includes NeMo Curator and the Cosmos tokenizer, which streamline the data curation process and enhance the quality of visual data. These tools are designed to handle large-scale data efficiently, making it easier to develop high-quality AI models for various applications, including robotics and autonomous driving. The Cosmos tokenizers, in particular, efficiently map visual data into compact, semantic tokens, which is crucial for training large-scale generative models. The tokenizer is available now on the NVIDIA/cosmos-tokenizer GitHub repo and on Hugging Face.
- (2024-07-23) New Llama 3.1 Support. The NeMo Framework now supports training and customizing the Llama 3.1 collection of LLMs from Meta.
- (2024-07-16) Accelerate your Generative AI Distributed Training Workloads with the NVIDIA NeMo Framework on Amazon EKS. NVIDIA NeMo Framework now runs distributed training workloads on an Amazon Elastic Kubernetes Service (Amazon EKS) cluster. For step-by-step instructions on creating an EKS cluster and running distributed training workloads with NeMo, see the accompanying GitHub repository.
- (2024-06-17) NVIDIA NeMo Accelerates LLM Innovation with Hybrid State Space Model Support. NVIDIA NeMo and Megatron Core now support pre-training and fine-tuning of state space models (SSMs). NeMo also supports training models based on the Griffin architecture as described by Google DeepMind. (A minimal SSM recurrence sketch follows this list.)
- (2024-06-18) NVIDIA releases 340B base, instruct, and reward models pretrained on a total of 9T tokens. See documentation and tutorials for SFT, PEFT, and PTQ with Nemotron 340B in the NeMo Framework User Guide.
- (2024-06-12) NVIDIA sets new generative AI performance and scale records in MLPerf Training v4.0. Using the NVIDIA NeMo Framework and NVIDIA Hopper GPUs, NVIDIA was able to scale to 11,616 H100 GPUs and achieve near-linear performance scaling on LLM pretraining. NVIDIA also achieved the highest LLM fine-tuning performance and raised the bar for text-to-image training.
- (2024-03-16) Accelerate your generative AI journey with NVIDIA NeMo Framework on GKE. An end-to-end walkthrough to train generative AI models on Google Kubernetes Engine (GKE) using the NVIDIA NeMo Framework is available at https://github.com/GoogleCloudPlatform/nvidia-nemo-on-gke. The walkthrough includes deta…
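The SSM sketch referenced above: a state space model replaces attention with a linear recurrence over a hidden state, h_t = A h_{t-1} + B x_t, y_t = C h_t, so sequence cost grows linearly with length. This is a toy, untrained illustration of that recurrence, not NeMo's or Megatron Core's implementation; the dimensions and random weights are arbitrary.

```python
# Toy illustration of the linear state-space recurrence that SSM-based
# LLMs build on; real implementations use learned, selective parameters
# and fused scan kernels rather than a Python loop.
import numpy as np

state_dim, in_dim, seq_len = 16, 8, 32
rng = np.random.default_rng(0)

A = rng.normal(scale=0.1, size=(state_dim, state_dim))  # state transition
B = rng.normal(size=(state_dim, in_dim))                # input projection
C = rng.normal(size=(in_dim, state_dim))                # output projection

x = rng.normal(size=(seq_len, in_dim))  # input sequence
h = np.zeros(state_dim)                 # hidden state, carried across steps
outputs = []
for t in range(seq_len):
    h = A @ h + B @ x[t]    # h_t = A h_{t-1} + B x_t
    outputs.append(C @ h)   # y_t = C h_t
y = np.stack(outputs)       # shape (seq_len, in_dim); cost linear in seq_len
```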