fvliang/DART
Official implementation of DART: Diffusion-Inspired Speculative Decoding for Fast LLM Inference.
DART accelerates inference for Large Language Models (LLMs) using a diffusion-inspired form of speculative decoding. It pairs an existing LLM with specialized DART model weights to generate text faster, cutting response latency for AI applications — ideal for anyone deploying or running LLMs.
Use this if you are an AI developer or MLOps engineer looking to significantly accelerate the inference speed of your deployed LLMs.
Not ideal if you are an end-user simply interacting with an LLM and do not have control over its deployment or underlying architecture.
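Speculative decoding, the family of techniques DART builds on, can be sketched conceptually: a cheap draft model proposes several tokens ahead, and the slower target LLM verifies them, keeping the longest agreeing prefix. The toy sketch below illustrates only that loop; the callables are placeholders, not DART's diffusion-inspired drafter or a real LLM, and a real implementation verifies all drafted tokens in a single batched forward pass rather than one call per token.

```python
# Toy illustration of the draft-and-verify loop behind speculative decoding.
# Both "models" here are stand-in callables, not DART's actual components.
from typing import Callable, List

def speculative_decode(
    draft_next: Callable[[List[int]], List[int]],  # proposes k tokens at once
    target_next: Callable[[List[int]], int],       # the slow, trusted model
    prompt: List[int],
    max_new: int,
) -> List[int]:
    tokens = list(prompt)
    while len(tokens) - len(prompt) < max_new:
        proposal = draft_next(tokens)              # cheap k-token guess
        accepted = 0
        for tok in proposal:
            # Verify each drafted token against the target model's choice.
            # (Real systems verify the whole draft in one forward pass.)
            if target_next(tokens) == tok:
                tokens.append(tok)
                accepted += 1
            else:
                break
        if accepted == 0:
            # Draft missed entirely: take one token from the target model.
            tokens.append(target_next(tokens))
    return tokens[: len(prompt) + max_new]

# Stand-in models: the "target" always emits last_token + 1; the draft
# guesses the next three tokens the same way, so all drafts are accepted.
target = lambda ts: ts[-1] + 1
draft = lambda ts: [ts[-1] + 1, ts[-1] + 2, ts[-1] + 3]
print(speculative_decode(draft, target, [0], 5))  # -> [0, 1, 2, 3, 4, 5]
```

When draft and target agree often, each verification round commits several tokens at the cost of roughly one target-model pass, which is where the speedup comes from.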
Stars: 45
Forks: 1
Language: Python
License: Apache-2.0
Category:
Last pushed: Feb 08, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/fvliang/DART"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
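The same request can be made from Python with the standard library. The endpoint URL is the one shown in the curl command above; the shape of the JSON response is an assumption here, so inspect the actual payload before depending on specific fields.

```python
# Minimal Python client for the quality endpoint shown above.
# The base URL comes from the listing; the response schema is NOT
# documented here, so treat the returned dict's keys as unknown.
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(ecosystem: str, owner: str, repo: str) -> str:
    """Build the endpoint URL, e.g. for transformers/fvliang/DART."""
    return f"{API_BASE}/{ecosystem}/{owner}/{repo}"

def fetch_quality(ecosystem: str, owner: str, repo: str) -> dict:
    """Fetch and decode the JSON quality record (no API key required)."""
    with urllib.request.urlopen(quality_url(ecosystem, owner, repo)) as resp:
        return json.load(resp)

if __name__ == "__main__":
    print(quality_url("transformers", "fvliang", "DART"))
```

For higher limits (1,000 requests/day), a free key would presumably be passed as a header or query parameter; the listing does not specify which, so check the API's own documentation.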
Higher-rated alternatives
ZHZisZZ/dllm
dLLM: Simple Diffusion Language Modeling
pengzhangzhi/Open-dLLM
Open diffusion language model for code generation — releasing pretraining, evaluation,...
EnnengYang/Awesome-Model-Merging-Methods-Theories-Applications
Model Merging in LLMs, MLLMs, and Beyond: Methods, Theories, Applications and Opportunities. ACM...
THUDM/LongWriter
[ICLR 2025] LongWriter: Unleashing 10,000+ Word Generation from Long Context LLMs
AIoT-MLSys-Lab/SVD-LLM
[ICLR 2025🔥] SVD-LLM & [NAACL 2025🔥] SVD-LLM V2