romsto/Speculative-Decoding

Implementation of the paper "Fast Inference from Transformers via Speculative Decoding" (Leviathan et al., 2023).

Quality score: 45 / 100 (Emerging)

This project helps developers accelerate large language model (LLM) text generation without compromising output quality or requiring fine-tuning. It takes a large transformer model and a smaller 'drafter' model as input, then generates text much faster than traditional methods. This is ideal for machine learning engineers, AI researchers, or MLOps specialists working with transformer-based LLMs.
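The core idea can be sketched in a few lines. Below is a toy illustration of the verification step from Leviathan et al. (2023): the drafter proposes tokens, the target model accepts each with probability min(1, p(x)/q(x)), and on the first rejection a replacement token is resampled from the residual distribution max(0, p - q). This is a simplified sketch with illustrative names (`speculative_step`, `target_p`, `draft_p`), not code from this repository.

```python
import random

def speculative_step(target_p, draft_p, draft_tokens, rng):
    """Verify a run of drafted tokens against the target distribution.

    target_p, draft_p: dicts mapping token -> probability.
    Accept each drafted token x with prob min(1, p(x)/q(x)); on the
    first rejection, resample from the residual max(0, p - q) and stop.
    Guarantees samples follow the target distribution exactly.
    """
    accepted = []
    for x in draft_tokens:
        p, q = target_p[x], draft_p[x]
        if rng.random() < min(1.0, p / q):
            accepted.append(x)
        else:
            # Residual distribution, proportional to max(0, p - q).
            residual = {t: max(0.0, target_p[t] - draft_p[t]) for t in target_p}
            r = rng.random() * sum(residual.values())
            for t, w in residual.items():
                r -= w
                if r <= 0:
                    accepted.append(t)
                    break
            break  # stop speculating after the first rejection
    return accepted

rng = random.Random(0)
out = speculative_step(
    {"a": 0.5, "b": 0.3, "c": 0.2},   # target model probabilities
    {"a": 0.3, "b": 0.3, "c": 0.4},   # drafter probabilities
    ["a", "c", "b"],                  # tokens proposed by the drafter
    rng,
)
```

Because several drafted tokens can be verified in a single target-model pass, the expensive model runs far fewer times while the output distribution is provably unchanged.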

101 stars. No commits in the last 6 months.

Use this if you need to speed up the text generation (inference) process of your transformer models while maintaining the exact same output quality and without retraining.

Not ideal if you are a non-technical user, as this project requires programming knowledge and a good understanding of transformer model architecture to implement.

Tags: LLM-inference, natural-language-generation, transformer-models, model-acceleration
Flags: Stale (6 months), No Package, No Dependents
Maintenance: 0 / 25
Adoption: 9 / 25
Maturity: 16 / 25
Community: 20 / 25


Stars: 101
Forks: 24
Language: Python
License: MIT
Last pushed: Dec 02, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/romsto/Speculative-Decoding"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
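The same endpoint can be queried from Python with only the standard library. This is a sketch: the URL pattern comes from the curl example above, but the shape of the JSON response is an assumption, and `quality_url`/`fetch_quality` are illustrative names, not part of any published client.

```python
import json
from urllib.parse import quote
from urllib.request import urlopen

# Base URL taken from the curl example above.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality/transformers"

def quality_url(owner, repo):
    """Build the per-repository quality endpoint URL."""
    return f"{API_BASE}/{quote(owner)}/{quote(repo)}"

def fetch_quality(owner, repo, timeout=10):
    """Fetch the quality data; 100 requests/day without a key.

    Assumes the endpoint returns JSON (not documented here).
    """
    with urlopen(quality_url(owner, repo), timeout=timeout) as resp:
        return json.load(resp)

url = quality_url("romsto", "Speculative-Decoding")
```

With a free API key (1,000 requests/day), you would presumably pass it as a header or query parameter; check the API's documentation for the exact mechanism.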