kssteven418/BigLittleDecoder
[NeurIPS'23] Speculative Decoding with Big Little Decoder
Big Little Decoder (BiLD) speeds up text generation from large language models, such as those used for machine translation or summarization. It pairs an existing large model with a smaller one: the small model drafts tokens cheaply and the large model verifies them, producing text at roughly twice the speed without losing quality. It is aimed at AI/ML engineers and researchers who deploy large language models for text generation tasks.
No commits in the last 6 months.
Use this if you need to accelerate text generation from your large language models without additional training or model architecture changes.
Not ideal if you do not work with pre-trained HuggingFace language models or if you need to accelerate tasks other than text generation.
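To make the draft-then-verify idea concrete, here is a minimal sketch of a speculative decoding loop. The "models" are toy next-token functions, and the names `small_next`, `big_next`, and the fixed draft length are illustrative assumptions, not the repo's actual API (BiLD plugs real HuggingFace models into this role and uses learned fallback/rollback policies rather than a fixed draft length).

```python
# Sketch of a big-little (speculative) decoding loop with toy stand-in
# "models" (plain next-token functions). Assumption: a fixed draft length;
# the BiLD repo itself decides dynamically when to hand off between models.

DRAFT_LEN = 4  # tokens drafted by the small model per round (assumed)

def small_next(context):
    # Toy small model: usually agrees with the big model, but drifts
    # every 5th position to exercise the rejection path.
    return context[-1] + 1 if len(context) % 5 else context[-1] + 2

def big_next(context):
    # Toy big model: always emits the previous token plus one.
    return context[-1] + 1

def speculative_generate(prompt, max_new_tokens):
    out = list(prompt)
    while len(out) - len(prompt) < max_new_tokens:
        # 1) The small model drafts a short continuation cheaply.
        draft, ctx = [], list(out)
        for _ in range(DRAFT_LEN):
            tok = small_next(ctx)
            draft.append(tok)
            ctx.append(tok)
        # 2) The big model verifies the draft; in a real system this is
        #    a single parallel forward pass over all drafted positions.
        accepted, ctx = [], list(out)
        for tok in draft:
            if big_next(ctx) == tok:
                accepted.append(tok)
                ctx.append(tok)
            else:
                # 3) On the first mismatch, keep the big model's token
                #    and start a fresh drafting round.
                accepted.append(big_next(ctx))
                break
        out.extend(accepted)
    return out[len(prompt):][:max_new_tokens]

# The output matches what the big model alone would generate greedily,
# which is why quality is preserved.
print(speculative_generate([0], 6))
```

The key property shown here is that verification makes the output identical to decoding with the big model alone; the speedup comes from the big model checking several drafted tokens per forward pass instead of producing one token at a time.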
Stars
96
Forks
12
Language
Python
License
Apache-2.0
Category
Last pushed
Feb 06, 2024
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/kssteven418/BigLittleDecoder"
Open to everyone: 100 requests/day with no key required. Get a free key for 1,000 requests/day.
Higher-rated alternatives
sgl-project/SpecForge
Train speculative decoding models effortlessly and port them smoothly to SGLang serving.
structuredllm/syncode
Efficient and general syntactical decoding for Large Language Models
SafeAILab/EAGLE
Official Implementation of EAGLE-1 (ICML'24), EAGLE-2 (EMNLP'24), and EAGLE-3 (NeurIPS'25).
romsto/Speculative-Decoding
Implementation of the paper Fast Inference from Transformers via Speculative Decoding, Leviathan...
hao-ai-lab/JacobiForcing
Jacobi Forcing: Fast and Accurate Diffusion-style Decoding