torchspec-project/TorchSpec
A PyTorch native library for training speculative decoding models
TorchSpec helps AI engineers accelerate large language model inference by training specialized 'draft' models for speculative decoding. It consumes hidden states captured from an existing inference engine and produces a smaller, faster draft model whose token proposals the full-size target model can verify in parallel, significantly speeding up text generation.
Use this if you are an AI engineer working on deploying large language models and want to speed up their text generation using speculative decoding.
Not ideal if you are looking for a general-purpose machine learning framework or are not specifically working with large language model optimization.
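To make the speculative decoding setup concrete, here is a toy sketch of the loop a trained draft model plugs into. Both "models" below are stand-in functions (not TorchSpec APIs): the target stands for the large model, the draft for the small model this library trains.

```python
def draft_model(prefix, k=4):
    """Cheaply propose the next k tokens (stand-in rule, not a real model)."""
    toks = [(len(prefix) + i) % 3 for i in range(k)]
    if k > 2:
        toks[2] = 9  # deliberate mistake to illustrate rejection
    return toks


def target_model(prefix):
    """Expensive ground truth for the next token (stand-in rule)."""
    return len(prefix) % 3


def speculative_step(prefix, k=4):
    """One round: the draft proposes k tokens, the target verifies them in order.

    Accept the longest prefix the target agrees with, then append one token
    from the target itself, so each round yields between 1 and k+1 tokens
    for the cost of a single target verification pass.
    """
    proposal = draft_model(prefix, k)
    accepted = []
    for tok in proposal:
        if target_model(prefix + accepted) == tok:
            accepted.append(tok)
        else:
            break  # first mismatch: discard the rest of the proposal
    accepted.append(target_model(prefix + accepted))  # correction/bonus token
    return accepted


print(speculative_step([0, 1], k=4))  # → [2, 0, 1]
```

The better the draft model tracks the target, the longer the accepted prefixes and the larger the speedup, which is exactly what training on the target's hidden states aims to improve.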
Stars
32
Forks
3
Language
Python
License
MIT
Last pushed
Mar 11, 2026
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/torchspec-project/TorchSpec"
Open to everyone: 100 requests/day with no key needed; a free key raises the limit to 1,000/day.
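For scripted use, the same endpoint can be called from Python. Only the URL pattern comes from the curl command above; the helper names here are illustrative, and the shape of the JSON response is an assumption, so inspect it before relying on specific fields.

```python
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(ecosystem: str, owner: str, repo: str) -> str:
    """Build the per-repository quality endpoint URL."""
    return f"{API_BASE}/{ecosystem}/{owner}/{repo}"


def fetch_quality(ecosystem: str, owner: str, repo: str) -> dict:
    """Fetch and decode the quality record (rate-limited without an API key).

    The decoded JSON structure is not documented in the listing, so treat
    the returned dict's keys as unknown until you inspect a live response.
    """
    with urllib.request.urlopen(quality_url(ecosystem, owner, repo)) as resp:
        return json.load(resp)


print(quality_url("transformers", "torchspec-project", "TorchSpec"))
```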
Compare
Higher-rated alternatives
sgl-project/SpecForge
Train speculative decoding models effortlessly and port them smoothly to SGLang serving.
structuredllm/syncode
Efficient and general syntactical decoding for Large Language Models
SafeAILab/EAGLE
Official Implementation of EAGLE-1 (ICML'24), EAGLE-2 (EMNLP'24), and EAGLE-3 (NeurIPS'25).
romsto/Speculative-Decoding
Implementation of the paper Fast Inference from Transformers via Speculative Decoding, Leviathan...
hao-ai-lab/JacobiForcing
Jacobi Forcing: Fast and Accurate Diffusion-style Decoding