NVIDIA-NeMo/Automodel
A PyTorch Distributed-native training library for LLMs and VLMs with out-of-the-box (OOTB) Hugging Face support
This library helps machine learning engineers and researchers adapt large language models (LLMs) and vision-language models (VLMs) from Hugging Face to specific tasks. You provide an existing Hugging Face model and your specialized dataset, and it produces a fine-tuned model optimized for your particular use case. It is aimed at developers building custom AI solutions on top of state-of-the-art foundation models.
Use this if you need to quickly and efficiently fine-tune or pre-train large-scale language or vision models from Hugging Face on specialized data, especially when working with NVIDIA GPUs.
Not ideal if you are looking for a no-code solution or primarily work with smaller, conventional machine learning models that don't require distributed training.
Stars: 366
Forks: 93
Language: Python
License: Apache-2.0
Category:
Last pushed: Mar 13, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/NVIDIA-NeMo/Automodel"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
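The endpoint above appears to follow a `/{owner}/{repo}` path pattern (inferred from this single example, not from documented API reference material). A minimal stdlib-only sketch for building the URL and fetching the JSON payload, under that assumption:

```python
import json
from urllib.request import urlopen

# Base path taken from the curl example above; the /{owner}/{repo}
# suffix is an assumption based on that one example.
BASE = "https://pt-edge.onrender.com/api/v1/quality/transformers"


def quality_api_url(owner: str, repo: str) -> str:
    """Build the quality-API URL for a given GitHub owner/repo pair."""
    return f"{BASE}/{owner}/{repo}"


def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch and decode the JSON payload.

    Each call counts against the 100-requests/day keyless limit.
    """
    with urlopen(quality_api_url(owner, repo)) as resp:
        return json.load(resp)
```

For example, `quality_api_url("NVIDIA-NeMo", "Automodel")` reproduces the URL shown in the curl command.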
Related models
vllm-project/vllm
A high-throughput and memory-efficient inference and serving engine for LLMs
sgl-project/sglang
SGLang is a high-performance serving framework for large language models and multimodal models.
alibaba/MNN
MNN: A blazing-fast, lightweight inference engine battle-tested by Alibaba, powering...
xorbitsai/inference
Swap GPT for any LLM by changing a single line of code. Xinference lets you run open-source,...
tensorzero/tensorzero
TensorZero is an open-source stack for industrial-grade LLM applications. It unifies an LLM...