eelbaz/dgx-spark-vllm-setup

One-command vLLM installation for NVIDIA DGX Spark with Blackwell GB10 GPUs (sm_121 architecture)

Quality score: 41/100 (Emerging)

This project simplifies running powerful large language models (LLMs) such as Qwen or OPT efficiently on an NVIDIA DGX Spark system with Blackwell GB10 GPUs. It handles the complex software setup for you: supply a model name and get a ready-to-use LLM API or Python environment for text generation and other language tasks. It is particularly useful for researchers, MLOps engineers, and AI developers building LLM applications on DGX Spark platforms.

Use this if you need to quickly and reliably deploy and serve large language models on an NVIDIA DGX Spark server with Blackwell GB10 GPUs for high-performance inference.

Not ideal if you are using different GPU hardware, a non-DGX Spark system, or if you require a highly customized, manual build process from scratch.
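Once the setup has a model serving, vLLM exposes an OpenAI-compatible HTTP API. A minimal sketch of a chat-completion request, assuming the server listens on localhost:8000 (vLLM's default port) and was launched with a hypothetical model name:

```python
import json
from urllib import request

# Assumptions: a vLLM OpenAI-compatible server is running locally on
# port 8000, and this model name matches the one passed to the setup script.
BASE_URL = "http://localhost:8000/v1/chat/completions"
MODEL = "Qwen/Qwen2.5-7B-Instruct"  # hypothetical; use the model you served

payload = {
    "model": MODEL,
    "messages": [{"role": "user", "content": "Say hello in one sentence."}],
    "max_tokens": 64,
    "temperature": 0.7,
}

def build_request(url: str = BASE_URL) -> request.Request:
    """Build the POST request without sending it."""
    return request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

if __name__ == "__main__":
    # Sending requires a running server; this raises URLError otherwise.
    with request.urlopen(build_request()) as resp:
        reply = json.load(resp)
        print(reply["choices"][0]["message"]["content"])
```

The same request works with any OpenAI-style client; only the base URL and model name change.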

Tags: LLM deployment, GPU acceleration, AI inference, Large Language Models, MLOps

No package · No dependents

Score breakdown:
Maintenance: 6/25
Adoption: 9/25
Maturity: 13/25
Community: 13/25


Stars: 71
Forks: 8
Language: Shell
License: MIT
Last pushed: Oct 28, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/eelbaz/dgx-spark-vllm-setup"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.