kssteven418/BigLittleDecoder

[NeurIPS'23] Speculative Decoding with Big Little Decoder

Score: 39 / 100 (Emerging)

Big Little Decoder speeds up text generation from large language models, such as those used for machine translation or summarization. Given an existing large model paired with a small one, it generates text at roughly twice the speed without losing quality. It is aimed at AI/ML engineers and researchers who deploy large language models for text generation tasks.

No commits in the last 6 months.

Use this if you need to accelerate text generation from your large language models without additional training or model architecture changes.

Not ideal if you do not work with pre-trained HuggingFace language models or if you need to accelerate tasks other than text generation.
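The core idea behind the speedup is a fallback/rollback loop: the small model drafts tokens cheaply, and the big model steps in only when the small model's confidence drops, rolling back any draft tokens it disagrees with. A minimal conceptual sketch follows; the toy `small_step`/`big_step` callables and the `fallback` threshold are illustrative assumptions, not the repository's actual API.

```python
def big_little_decode(small_step, big_step, prompt, max_len=20, fallback=0.5):
    """Conceptual fallback/rollback loop (sketch, not the repo's real API).

    small_step(tokens) -> (token, confidence): cheap draft model stand-in.
    big_step(tokens)   -> token: expensive verifier model stand-in.
    """
    tokens = list(prompt)
    draft_start = len(tokens)  # first position not yet verified by the big model
    while len(tokens) < max_len:
        tok, conf = small_step(tokens)
        if conf >= fallback:
            tokens.append(tok)  # small model is confident: keep drafting
            continue
        # Fallback: the big model re-checks every unverified draft token,
        # plus the current position where the small model was unsure.
        for i in range(draft_start, len(tokens) + 1):
            big_tok = big_step(tokens[:i])
            if i == len(tokens) or big_tok != tokens[i]:
                # Rollback: drop draft tokens the big model rejects and
                # substitute the big model's own token.
                tokens = tokens[:i] + [big_tok]
                break
        draft_start = len(tokens)  # everything so far is now verified
    return tokens
```

When the small model stays confident and correct, the big model is never invoked, which is where the speedup comes from; the threshold trades speed against how often the big model must intervene.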

large-language-models machine-translation text-summarization natural-language-generation generative-ai
Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 9 / 25
Maturity 16 / 25
Community 14 / 25


Stars: 96
Forks: 12
Language: Python
License: Apache-2.0
Last pushed: Feb 06, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/kssteven418/BigLittleDecoder"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
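The same endpoint can be queried from Python using only the standard library. The URL below is taken verbatim from the curl example; the shape of the JSON response is not documented here, so it is returned as-is (a sketch, not an official client):

```python
import json
import urllib.request

# Endpoint copied from the curl example above.
URL = ("https://pt-edge.onrender.com/api/v1/quality/"
       "transformers/kssteven418/BigLittleDecoder")

def fetch_quality(url=URL, timeout=10):
    """Fetch the quality record as a dict (free tier, no API key needed)."""
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        return json.load(resp)
```

Keep in mind the free tier's 100 requests/day limit when polling this in scripts.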