ccs96307/fast-llm-inference

Accelerating LLM inference with techniques like speculative decoding, quantization, and kernel fusion, focusing on implementing state-of-the-art research papers.

Score: 22 / 100 (Experimental)

This project helps AI developers and researchers make Large Language Models (LLMs) respond faster without losing accuracy. It implements techniques from state-of-the-art research papers on LLM acceleration, such as speculative decoding and quantization (a sketch of the speculative-decoding idea follows below). The result is more efficient LLM inference, useful for anyone deploying or experimenting with LLMs who needs quicker response times.
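To make the core idea concrete, here is a minimal greedy speculative-decoding sketch. It is illustrative only and not taken from this repository; draft_next and target_next are hypothetical stand-ins for a cheap draft model and the full target model, each mapping a token sequence to a greedy next-token id.

from typing import Callable, List

def speculative_decode(
    prompt: List[int],
    draft_next: Callable[[List[int]], int],
    target_next: Callable[[List[int]], int],
    max_new_tokens: int = 32,
    k: int = 4,  # tokens the draft model proposes per round
) -> List[int]:
    tokens = list(prompt)
    while len(tokens) - len(prompt) < max_new_tokens:
        # 1. The cheap draft model proposes k tokens autoregressively.
        draft = list(tokens)
        for _ in range(k):
            draft.append(draft_next(draft))
        proposed = draft[len(tokens):]

        # 2. The target model verifies the proposals; keep the longest
        #    prefix that matches its own greedy choices.
        accepted = 0
        for i, tok in enumerate(proposed):
            if target_next(tokens + proposed[:i]) == tok:
                accepted += 1
            else:
                break
        tokens += proposed[:accepted]

        # 3. On the first mismatch, fall back to the target's token.
        if accepted < k:
            tokens.append(target_next(tokens))
    return tokens[:len(prompt) + max_new_tokens]

Because verification compares each proposal against the target model's own greedy choice, the accepted output is exactly what the target model alone would have produced, which is why the speed-up does not cost accuracy. In a real implementation the target model scores all k proposals in a single batched forward pass, replacing k sequential target steps; the sketch calls it per token only for clarity.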

No commits in the last 6 months.

Use this if you are an AI developer or researcher looking to improve the speed and efficiency of your Large Language Model deployments.

Not ideal if you are an end-user of an application powered by an LLM and are not involved in its technical implementation.

Tags: LLM deployment, AI research, model optimization, machine learning engineering, inference acceleration
No License · Stale (6 months) · No Package · No Dependents
Maintenance: 2 / 25
Adoption: 5 / 25
Maturity: 8 / 25
Community: 7 / 25

Stars: 11
Forks: 1
Language: Python
License: None
Last pushed: Jul 01, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/ccs96307/fast-llm-inference"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
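The same data can be fetched from Python using only the standard library. A minimal sketch; the response schema is not documented on this page, so the script simply pretty-prints whatever JSON comes back:

import json
import urllib.request

# Quality endpoint shown in the curl example above.
URL = ("https://pt-edge.onrender.com/api/v1/quality/"
       "transformers/ccs96307/fast-llm-inference")

with urllib.request.urlopen(URL) as resp:
    data = json.loads(resp.read().decode("utf-8"))

print(json.dumps(data, indent=2))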