BaohaoLiao/RSD

[ICML 2025] Reward-guided Speculative Decoding (RSD) for efficiency and effectiveness.

Quality score: 35 / 100 (Emerging)

This project helps large language model (LLM) developers speed up complex reasoning tasks. In reward-guided speculative decoding, a lightweight draft model proposes candidate reasoning steps, a reward model scores them, and the larger target model steps in only when a candidate scores poorly, yielding faster inference while preserving or improving output quality. AI engineers and researchers building and deploying LLMs for advanced applications would use this.
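The accept-or-fallback loop described above can be sketched as follows. This is a minimal illustration, not the repository's implementation: `draft_step`, `target_step`, `reward`, and the `threshold` value are all hypothetical stand-ins for the draft model, the large target model, and the reward model.

```python
def rsd_generate(prompt, draft_step, target_step, reward,
                 threshold=0.7, max_steps=8):
    """Sketch of reward-guided speculative decoding (hypothetical API).

    At each reasoning step the cheap draft model proposes a candidate;
    the reward model scores it, and only low-scoring candidates are
    regenerated by the expensive target model.
    """
    steps = []
    for _ in range(max_steps):
        candidate = draft_step(prompt, steps)        # cheap proposal
        if reward(prompt, steps, candidate) >= threshold:
            steps.append(candidate)                  # accept draft output
        else:
            steps.append(target_step(prompt, steps))  # fall back to large model
    return steps


# Toy stand-ins: a draft that is always trusted vs. never trusted.
always_good = rsd_generate("q", lambda p, s: "draft",
                           lambda p, s: "target",
                           lambda p, s, c: 1.0, max_steps=3)
never_good = rsd_generate("q", lambda p, s: "draft",
                          lambda p, s: "target",
                          lambda p, s, c: 0.0, max_steps=3)
print(always_good)  # all steps come from the draft model
print(never_good)   # all steps come from the target model
```

The savings come from how often the reward model accepts draft steps: each accepted step skips a call to the large model entirely, rather than verifying it token by token as in classic speculative decoding.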

No commits in the last 6 months.

Use this if you need to make your large language models perform complex reasoning tasks more efficiently, achieving better accuracy with significantly reduced computational cost.

Not ideal if you are not working with large language models, do not have multiple GPUs available, or are looking for a solution that doesn't require modifying model configurations.

LLM-optimization AI-efficiency model-inference computational-linguistics deep-learning-engineering
Badges: Stale (6m) · No Package · No Dependents

Maintenance: 2 / 25
Adoption: 8 / 25
Maturity: 16 / 25
Community: 9 / 25


Stars: 56
Forks: 5
Language: Python
License: Apache-2.0
Last pushed: May 02, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/BaohaoLiao/RSD"

Open to everyone: 100 requests/day with no key required. Get a free key for 1,000/day.
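The same endpoint can be queried from Python with the standard library. This is a minimal sketch: the endpoint URL comes from the curl example above, but the response's field names are not documented here, so the JSON payload should be inspected before relying on specific keys.

```python
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality/transformers"


def quality_url(owner: str, repo: str) -> str:
    # Build the per-repository quality endpoint URL.
    return f"{API_BASE}/{owner}/{repo}"


def fetch_quality(owner: str, repo: str) -> dict:
    # Fetch the quality record; assumes the endpoint returns JSON
    # (the schema is undocumented, so inspect the payload first).
    with urllib.request.urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)


print(quality_url("BaohaoLiao", "RSD"))
```

With a free key, the daily limit rises from 100 to 1,000 requests; how the key is passed (header or query parameter) is not specified on this page.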