NVIDIA-NeMo/Automodel

PyTorch-native distributed training library for LLMs/VLMs with out-of-the-box (OOTB) Hugging Face support

Quality score: 59 / 100 (Established)

This tool helps machine learning engineers and researchers adapt large language models (LLMs) and vision-language models (VLMs) from Hugging Face for specific tasks. You provide an existing Hugging Face model and your specialized dataset, and it produces a fine-tuned model optimized for your particular use case. It's designed for practitioners building custom AI solutions on top of state-of-the-art foundation models.


Use this if you need to quickly and efficiently fine-tune or pre-train large-scale language or vision models from Hugging Face on specialized data, especially when working with NVIDIA GPUs.

Not ideal if you are looking for a no-code solution or primarily work with smaller, conventional machine learning models that don't require distributed training.

large-language-models vision-language-models model-customization ai-model-training applied-ai
No package published · No dependents
Maintenance: 10 / 25
Adoption: 10 / 25
Maturity: 15 / 25
Community: 24 / 25


Stars: 366
Forks: 93
Language: Python
License: Apache-2.0
Last pushed: Mar 13, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/NVIDIA-NeMo/Automodel"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
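For programmatic use, the same endpoint can be called from Python. This is a minimal sketch: the URL path (`/api/v1/quality/transformers/NVIDIA-NeMo/Automodel`) comes from the curl example above, but the `quality_url` helper and the assumption that the response is JSON are mine, not part of the documented API.

```python
import json
import urllib.request

# Base path taken from the curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(ecosystem: str, owner: str, repo: str) -> str:
    """Build the quality-endpoint URL for a repository.

    `quality_url` is a hypothetical helper; only the path layout is
    taken from the curl example in the docs.
    """
    return f"{BASE}/{ecosystem}/{owner}/{repo}"


url = quality_url("transformers", "NVIDIA-NeMo", "Automodel")
print(url)

# An actual fetch needs network access; the JSON response shape is an
# assumption — inspect `data` before relying on specific fields.
# with urllib.request.urlopen(url) as resp:
#     data = json.load(resp)
#     print(json.dumps(data, indent=2))
```

Without an API key this stays within the 100 requests/day anonymous limit; a free key raises that to 1,000/day.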