fvliang/DART

Official Implementation of DART (DART: Diffusion-Inspired Speculative Decoding for Fast LLM Inference).

Score: 32 / 100 (Emerging)

This tool helps developers make Large Language Models (LLMs) respond faster. Using diffusion-inspired speculative decoding, it pairs an existing LLM with specialized DART model weights to generate text more quickly, reducing wait times for AI applications. It is aimed at anyone deploying or running LLMs.

Use this if you are an AI developer or MLOps engineer looking to significantly accelerate the inference speed of your deployed LLMs.

Not ideal if you are an end-user simply interacting with an LLM and do not have control over its deployment or underlying architecture.

Tags: LLM deployment, AI infrastructure, model serving, MLOps, generative AI
No package · No dependents
Maintenance 10 / 25
Adoption 8 / 25
Maturity 11 / 25
Community 3 / 25


Stars: 45
Forks: 1
Language: Python
License: Apache-2.0
Last pushed: Feb 08, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/fvliang/DART"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
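The same request can be made programmatically. Below is a minimal Python sketch that builds the endpoint URL and fetches the report; it assumes the endpoint returns JSON, and the helper names (`quality_url`, `fetch_quality`) are illustrative, not part of any official client.

```python
# Hedged sketch: query the pt-edge quality API for a repository.
# Assumes the endpoint returns a JSON body; no field names are assumed.
import json
from urllib.request import urlopen


def quality_url(ecosystem: str, repo: str) -> str:
    """Build the quality API URL for a repository (illustrative helper)."""
    return f"https://pt-edge.onrender.com/api/v1/quality/{ecosystem}/{repo}"


def fetch_quality(ecosystem: str, repo: str) -> dict:
    """Fetch and decode the quality report (makes a network call)."""
    with urlopen(quality_url(ecosystem, repo)) as resp:
        return json.load(resp)


if __name__ == "__main__":
    # Matches the curl example above; no API key needed up to 100 requests/day.
    print(quality_url("transformers", "fvliang/DART"))
```

No key is required within the free rate limit; a free key raises the daily quota as noted above.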