torchspec-project/TorchSpec

A PyTorch-native library for training speculative decoding models

Score: 37 / 100 (Emerging)

This tool helps AI engineers optimize large language models by training specialized 'draft' models for speculative decoding. It takes hidden states from existing inference engines as input and produces a smaller, faster draft model. This allows for significant speed improvements when generating text with large language models.

Use this if you are an AI engineer working on deploying large language models and want to speed up their text generation using speculative decoding.

Not ideal if you are looking for a general-purpose machine learning framework or are not specifically working with large language model optimization.

large-language-models LLM-deployment model-optimization AI-inference deep-learning-engineering
No Package · No Dependents
Maintenance: 10 / 25
Adoption: 7 / 25
Maturity: 11 / 25
Community: 9 / 25


Stars: 32
Forks: 3
Language: Python
License: MIT
Last pushed: Mar 11, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/torchspec-project/TorchSpec"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
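The same endpoint can be queried from Python. A minimal sketch using only the standard library is below; note that the response schema is not documented on this page, so the field names in the sample payload (`score`, `tier`, `stars`) are assumptions for illustration, not the API's guaranteed format:

```python
import json
from urllib.parse import quote
from urllib.request import urlopen

# Base path taken from the curl example on this page.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality/transformers"

def quality_url(owner: str, repo: str) -> str:
    """Build the quality-API URL for a GitHub owner/repo pair."""
    return f"{API_BASE}/{quote(owner)}/{quote(repo)}"

def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch and decode the JSON quality report (requires network access)."""
    with urlopen(quality_url(owner, repo), timeout=10) as resp:
        return json.load(resp)

# Offline demo with a hypothetical payload, since the real response
# schema is not shown on this page.
sample = json.loads('{"score": 37, "tier": "Emerging", "stars": 32}')
url = quality_url("torchspec-project", "TorchSpec")
print(url)
print(f"score={sample['score']} tier={sample['tier']}")
```

Within the free tier (100 requests/day without a key), `fetch_quality("torchspec-project", "TorchSpec")` would return the same data as the curl command above.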