psmarter/mini-infer

A high-performance LLM inference engine with PagedAttention

Quality score: 33 / 100 (Emerging)

This project helps developers serve large language models (LLMs) more efficiently, especially when managing multiple requests concurrently. It takes your trained LLM and provides a high-performance HTTP API, similar to OpenAI's, allowing applications to send prompts and receive generated text. The end-users are AI/ML engineers, MLOps engineers, or backend developers responsible for deploying and scaling LLM-powered applications.

Use this if you need to deploy large language models with high throughput and low latency, especially in scenarios with many concurrent user requests.

Not ideal if you are an end-user looking for a ready-to-use application or if your primary goal is training LLMs rather than serving them.
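As a sketch of what a client call to an OpenAI-style serving API might look like, here is a minimal example; the host, port, endpoint path, and model name below are assumptions for illustration, not confirmed mini-infer defaults:

```python
import json
import urllib.request

# Hypothetical base URL; mini-infer's actual host/port may differ.
BASE_URL = "http://localhost:8000/v1"

def build_chat_request(prompt: str, model: str = "mini-infer",
                       max_tokens: int = 128) -> dict:
    """Build an OpenAI-style chat-completions payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

def send_chat_request(prompt: str) -> bytes:
    """POST the payload to the (assumed) /chat/completions route."""
    req = urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(build_chat_request(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read()
```

Any OpenAI-compatible client library pointed at the same base URL should be able to send the identical payload shape.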

Tags: LLM deployment, MLOps, AI infrastructure, API serving, large language models
No Package · No Dependents
Maintenance: 6 / 25
Adoption: 8 / 25
Maturity: 13 / 25
Community: 6 / 25


Stars: 61
Forks: 3
Language: Python
License: MIT
Last pushed: Dec 31, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/psmarter/mini-infer"

Open to everyone: 100 requests/day with no key; a free key raises the limit to 1,000/day.