waybarrios/dgx-spark-finetune-llm
LLM fine-tuning with LoRA + NVFP4/MXFP8 on NVIDIA DGX Spark (Blackwell GB10)
This project helps machine learning practitioners efficiently customize large language models (LLMs) for specific tasks using their own datasets. It takes an existing LLM and your custom data as input and produces a specialized model (via compact LoRA adapter weights) that performs better on your domain-specific prompts. Data scientists, ML engineers, and researchers working with LLMs would use this to create highly relevant AI models.
Use this if you need to fine-tune a large language model on your specialized data and have access to NVIDIA DGX Spark or other Blackwell GPUs, where the NVFP4/MXFP8 formats give the best performance and memory efficiency.
Not ideal if you don't have access to NVIDIA Blackwell GPUs, or if your fine-tuning needs don't require low-precision quantization (NVFP4/MXFP8) and hardware-specific optimization.
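LoRA's efficiency comes from training a small low-rank update (two factors B and A) instead of the full weight matrix. A minimal sketch of the parameter arithmetic, with illustrative dimensions not taken from this repo:

```python
def lora_param_counts(d_in: int, d_out: int, r: int) -> tuple[int, int]:
    """Compare trainable parameters: full fine-tune vs. a rank-r LoRA adapter.

    Full fine-tuning updates the entire d_out x d_in weight matrix;
    LoRA trains only two low-rank factors, B (d_out x r) and A (r x d_in).
    """
    full = d_out * d_in
    lora = d_out * r + r * d_in
    return full, lora

# Example: a 4096 x 4096 projection layer with rank-8 adapters
full, lora = lora_param_counts(4096, 4096, 8)
print(full, lora, f"{100 * lora / full:.2f}%")  # 16777216 65536 0.39%
```

With rank 8, the adapter trains roughly 0.4% of the layer's weights, which is why LoRA fine-tuning fits in far less GPU memory than full fine-tuning.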
Stars: 11
Forks: —
Language: Python
License: MIT
Category: —
Last pushed: Dec 22, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/waybarrios/dgx-spark-finetune-llm"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
Goekdeniz-Guelmez/mlx-lm-lora
Train Large Language Models on MLX.
uber-research/PPLM
Plug and Play Language Model implementation. Allows steering the topic and attributes of GPT-2 models.
VHellendoorn/Code-LMs
Guide to using pre-trained large language models of source code
ssbuild/chatglm_finetuning
chatglm 6b finetuning and alpaca finetuning
jarobyte91/pytorch_beam_search
A lightweight implementation of Beam Search for sequence models in PyTorch.