romsto/Speculative-Decoding
An implementation of the paper "Fast Inference from Transformers via Speculative Decoding" (Leviathan et al., 2023).
This project helps developers accelerate large language model (LLM) text generation without compromising output quality or requiring fine-tuning. Given a large target transformer and a smaller "drafter" model, the drafter cheaply proposes several tokens ahead, and the target model verifies them in a single pass, producing text much faster than standard autoregressive decoding. This is ideal for machine learning engineers, AI researchers, and MLOps specialists working with transformer-based LLMs.
101 stars. No commits in the last 6 months.
Use this if you need to speed up the text generation (inference) of your transformer models while guaranteeing exactly the same output distribution, without any retraining.
Not ideal if you are a non-technical user: the project requires programming knowledge and a good understanding of transformer architecture to integrate.
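The draft-then-verify loop described above can be sketched as a toy greedy variant. Here `target_next` and `draft_next` are hypothetical placeholders (not this repo's API) standing in for greedy next-token calls to the large and drafter models:

```python
def speculative_generate(target_next, draft_next, prompt, n_new, k=4):
    """Toy greedy speculative decoding: the drafter proposes k tokens,
    the target keeps the longest matching prefix and corrects the first
    mismatch. Output is identical to greedy decoding with the target alone."""
    out = list(prompt)
    while len(out) < len(prompt) + n_new:
        draft = []
        for _ in range(k):                       # cheap autoregressive drafting
            draft.append(draft_next(out + draft))
        for i, proposed in enumerate(draft):     # verification; one batched
            correct = target_next(out + draft[:i])  # forward pass in practice
            out.append(correct)
            if correct != proposed:              # first mismatch: the target's
                break                            # token is the correction
    return out[:len(prompt) + n_new]
```

Because every emitted token comes from the target model, the speedup depends only on how often the drafter's guesses are accepted, not on its quality guarantees.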
Stars: 101
Forks: 24
Language: Python
License: MIT
Category:
Last pushed: Dec 02, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/romsto/Speculative-Decoding"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
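The same endpoint can be queried from Python; the path layout below is inferred from the curl example above, and `quality_url` is a hypothetical helper, not part of the API itself:

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(ecosystem: str, repo: str) -> str:
    # Mirror the curl example's path: /quality/<ecosystem>/<owner>/<name>
    return f"{BASE}/{ecosystem}/{repo}"

url = quality_url("transformers", "romsto/Speculative-Decoding")
# Without a key this is rate-limited to 100 requests/day:
# data = json.load(urllib.request.urlopen(url))
```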
Higher-rated alternatives
sgl-project/SpecForge
Train speculative decoding models effortlessly and port them smoothly to SGLang serving.
structuredllm/syncode
Efficient and general syntactical decoding for Large Language Models
SafeAILab/EAGLE
Official Implementation of EAGLE-1 (ICML'24), EAGLE-2 (EMNLP'24), and EAGLE-3 (NeurIPS'25).
hao-ai-lab/JacobiForcing
Jacobi Forcing: Fast and Accurate Diffusion-style Decoding
kssteven418/BigLittleDecoder
[NeurIPS'23] Speculative Decoding with Big Little Decoder