waybarrios/dgx-spark-finetune-llm

LLM fine-tuning with LoRA + NVFP4/MXFP8 on NVIDIA DGX Spark (Blackwell GB10)

Quality score: 24 / 100 (Experimental)

This project helps machine learning practitioners efficiently customize large language models (LLMs) for specific tasks using their own datasets. It takes an existing LLM and your custom data as input and produces a specialized model, trained via lightweight LoRA adapters, that performs better on your unique prompts. Data scientists, ML engineers, and researchers working with LLMs would use this to create highly relevant AI models.

Use this if you need to fine-tune a large language model with your specialized data on powerful NVIDIA DGX Spark and Blackwell GPUs for optimal performance and memory efficiency.

Not ideal if you don't have access to NVIDIA Blackwell GPUs or if your fine-tuning needs don't require high-precision quantization and advanced hardware optimization.
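The low-rank adaptation (LoRA) technique this project builds on can be sketched in a few lines. This is a hypothetical, self-contained NumPy illustration of the idea, not this repository's actual implementation: a frozen pretrained weight W is augmented with a trainable low-rank update B @ A, scaled by alpha / r, so only a small number of parameters are trained.

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r, alpha = 8, 8, 2, 16  # illustrative sizes, not the repo's defaults

W = rng.normal(size=(d_out, d_in))      # frozen pretrained weight (not trained)
A = rng.normal(size=(r, d_in)) * 0.01   # trainable, small random init
B = np.zeros((d_out, r))                # trainable, zero init: no change at start

def lora_forward(x):
    # x: (batch, d_in). The frozen path x @ W.T is corrected by the
    # low-rank term (alpha / r) * x @ A.T @ B.T, which is what training updates.
    return x @ W.T + (alpha / r) * (x @ A.T @ B.T)

x = rng.normal(size=(4, d_in))
# With B = 0, the adapted output equals the frozen model's output exactly,
# so fine-tuning starts from the pretrained behavior.
assert np.allclose(lora_forward(x), x @ W.T)
```

Because only A and B (here 2×8 + 8×2 = 32 values) are trained instead of the full 8×8 weight, memory and optimizer state shrink dramatically; the NVFP4/MXFP8 quantization mentioned above further reduces the footprint of the frozen weights on Blackwell GPUs.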

Topics: LLM-customization, AI-model-training, machine-learning-operations, natural-language-processing, GPU-accelerated-computing
No package · No dependents
Maintenance 6 / 25
Adoption 5 / 25
Maturity 13 / 25
Community 0 / 25


Stars: 11
Forks:
Language: Python
License: MIT
Last pushed: Dec 22, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/waybarrios/dgx-spark-finetune-llm"

Open to everyone: 100 requests/day, no key needed. Get a free key for 1,000/day.