psmarter/mini-infer
A high-performance LLM inference engine with PagedAttention
This project helps developers serve large language models (LLMs) more efficiently, especially when managing multiple requests concurrently. It takes your trained LLM and provides a high-performance HTTP API, similar to OpenAI's, allowing applications to send prompts and receive generated text. The end-users are AI/ML engineers, MLOps engineers, or backend developers responsible for deploying and scaling LLM-powered applications.
Use this if you need to deploy large language models with high throughput and low latency, especially in scenarios with many concurrent user requests.
Not ideal if you are an end-user looking for a ready-to-use application or if your primary goal is training LLMs rather than serving them.
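Since the project exposes an OpenAI-style HTTP API, a client request would look roughly like the sketch below. The `/v1/completions` route, the field names, and the model name are assumptions for illustration based on the OpenAI convention, not confirmed details of mini-infer's API.

```python
import json

# Hypothetical request payload for an OpenAI-compatible /v1/completions
# endpoint. The route, field names, and model name are assumptions,
# not taken from the mini-infer documentation.
payload = {
    "model": "my-model",        # name of the served model (assumed)
    "prompt": "Hello, world!",  # input prompt to complete
    "max_tokens": 64,           # cap on the number of generated tokens
    "temperature": 0.7,         # sampling temperature
}

# Serialize to the JSON body an HTTP client would POST to the server.
body = json.dumps(payload)
print(body)
```

Any HTTP client (curl, `requests`, an OpenAI SDK pointed at a custom base URL) could then POST this body to the running server.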
Stars: 61
Forks: 3
Language: Python
License: MIT
Category:
Last pushed: Dec 31, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/psmarter/mini-infer"
Open to everyone: 100 requests/day with no key required. Get a free key for 1,000 requests/day.
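The curl example above can also be driven from Python. The helper below only builds the request URL from the path segments shown in that example (ecosystem, owner, repo); the path layout is taken directly from the curl command, and actually fetching it is left to whatever HTTP client you prefer.

```python
from urllib.parse import quote

# Base endpoint taken from the curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(ecosystem: str, owner: str, repo: str) -> str:
    """Build the quality-API URL for a repository.

    The ecosystem/owner/repo path layout mirrors the curl example;
    segments are percent-encoded in case a name contains special characters.
    """
    return f"{BASE}/{quote(ecosystem)}/{quote(owner)}/{quote(repo)}"

print(quality_url("transformers", "psmarter", "mini-infer"))
# → https://pt-edge.onrender.com/api/v1/quality/transformers/psmarter/mini-infer
```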
Higher-rated alternatives
vllm-project/vllm
A high-throughput and memory-efficient inference and serving engine for LLMs
sgl-project/sglang
SGLang is a high-performance serving framework for large language models and multimodal models.
alibaba/MNN
MNN: A blazing-fast, lightweight inference engine battle-tested by Alibaba, powering...
xorbitsai/inference
Swap GPT for any LLM by changing a single line of code. Xinference lets you run open-source,...
tensorzero/tensorzero
TensorZero is an open-source stack for industrial-grade LLM applications. It unifies an LLM...