AutonomicPerfectionist/PipeInfer

PipeInfer: Accelerating LLM Inference using Asynchronous Pipelined Speculation

Quality score: 36 / 100 (Emerging)

This project helps machine learning engineers and researchers speed up text generation from large language models (LLMs). By pairing two models, a small "speculative" (draft) model and a large "target" model, it can produce responses much faster than running the target model alone. You supply the two models and a text prompt, and it returns the generated text at significantly higher throughput.

No commits in the last 6 months.

Use this if you need to dramatically speed up text generation from Llama, Falcon, Baichuan, or other compatible large language models running across a multi-node computing cluster.

Not ideal if you are running LLM inference on a single machine or do not have access to a distributed computing environment.
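
For orientation, below is a minimal, self-contained sketch of the classic speculative-decoding loop that a two-model setup like this relies on. The token-level accept/resample rule is the standard one from the speculative sampling literature; the toy draft_model and target_model functions, the 4-token vocabulary, and all constants are invented stand-ins for this example, not PipeInfer's actual API or implementation.

// Toy sketch of the draft -> verify -> accept/resample cycle. Each "model"
// is just a function from a token context to a probability distribution
// over a tiny vocabulary; real systems (including PipeInfer) run LLMs here.
#include <algorithm>
#include <cstddef>
#include <cstdio>
#include <random>
#include <vector>

using Dist = std::vector<double>;  // probabilities over the vocabulary

constexpr int kVocab = 4;

// Cheap "speculative" (draft) model: mildly prefers the next id cyclically.
Dist draft_model(const std::vector<int>& ctx) {
    Dist q(kVocab, 0.1);
    q[(ctx.empty() ? 0 : ctx.back() + 1) % kVocab] = 0.7;
    return q;
}

// Expensive "target" model: similar shape, different sharpness.
Dist target_model(const std::vector<int>& ctx) {
    Dist p(kVocab, 0.15);
    p[(ctx.empty() ? 0 : ctx.back() + 1) % kVocab] = 0.55;
    return p;
}

int sample(const Dist& p, std::mt19937& rng) {
    std::discrete_distribution<int> d(p.begin(), p.end());
    return d(rng);
}

int main() {
    std::mt19937 rng(42);
    std::vector<int> tokens = {0};  // "prompt"
    const std::size_t k = 4;        // speculation depth

    while (tokens.size() < 24) {
        // 1. The draft model proposes k tokens autoregressively.
        std::vector<int> proposal;
        std::vector<Dist> q;  // draft distribution at each step
        std::vector<int> ctx = tokens;
        for (std::size_t i = 0; i < k; ++i) {
            q.push_back(draft_model(ctx));
            proposal.push_back(sample(q.back(), rng));
            ctx.push_back(proposal.back());
        }

        // 2. The target model verifies each proposed token t, accepting it
        //    with probability min(1, p(t) / q(t)).
        std::uniform_real_distribution<double> u(0.0, 1.0);
        ctx = tokens;
        std::size_t accepted = 0;
        for (; accepted < proposal.size(); ++accepted) {
            const Dist p = target_model(ctx);
            const int t = proposal[accepted];
            if (u(rng) < std::min(1.0, p[t] / q[accepted][t])) {
                tokens.push_back(t);
                ctx.push_back(t);
            } else {
                // 3. On rejection, resample from the residual max(0, p - q)
                //    so output still follows the target distribution.
                Dist r(kVocab, 0.0);
                double norm = 0.0;
                for (int v = 0; v < kVocab; ++v) {
                    r[v] = std::max(0.0, p[v] - q[accepted][v]);
                    norm += r[v];
                }
                tokens.push_back(norm > 0.0 ? sample(r, rng) : sample(p, rng));
                break;
            }
        }
        // If every proposal was accepted, take one bonus token from the target.
        if (accepted == proposal.size())
            tokens.push_back(sample(target_model(ctx), rng));
    }

    for (int t : tokens) std::printf("%d ", t);
    std::printf("\n");
}

In this basic scheme the draft and verify phases alternate synchronously; per its title, PipeInfer's contribution is to make the speculation asynchronous and pipelined across cluster nodes so those phases can overlap.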

Tags: LLM deployment, distributed inference, AI acceleration, large language models, natural language generation
Badges: Stale (6 months), No Package, No Dependents
Maintenance: 0 / 25
Adoption: 7 / 25
Maturity: 16 / 25
Community: 13 / 25


Stars: 32
Forks: 5
Language: C++
License: MIT
Last pushed: Nov 16, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/AutonomicPerfectionist/PipeInfer"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
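
If you would rather call the endpoint from a program than the shell, here is one hypothetical way to do it in C++ with libcurl (link with -lcurl). The response schema is not documented on this page, so the sketch just prints the raw JSON body; only the URL comes from the curl command above.

// Fetch the quality endpoint and print the raw response body.
#include <curl/curl.h>
#include <iostream>
#include <string>

// libcurl write callback: append each received chunk to a std::string.
static size_t write_cb(char* data, size_t size, size_t nmemb, void* userp) {
    static_cast<std::string*>(userp)->append(data, size * nmemb);
    return size * nmemb;
}

int main() {
    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL* curl = curl_easy_init();
    if (!curl) return 1;

    std::string body;
    curl_easy_setopt(curl, CURLOPT_URL,
                     "https://pt-edge.onrender.com/api/v1/quality/"
                     "transformers/AutonomicPerfectionist/PipeInfer");
    curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, write_cb);
    curl_easy_setopt(curl, CURLOPT_WRITEDATA, &body);
    curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L);

    const CURLcode res = curl_easy_perform(curl);
    curl_easy_cleanup(curl);
    curl_global_cleanup();

    if (res != CURLE_OK) {
        std::cerr << "request failed: " << curl_easy_strerror(res) << "\n";
        return 1;
    }
    std::cout << body << "\n";
}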