zRzRzRzRzRzRzR/lm-fly
Accelerated inference frameworks for large models: make LLMs fly
This project helps developers speed up inference for large language models (LLMs) running locally. Given an LLM and an acceleration framework, it produces an optimized setup for faster inference. Developers deploying LLMs on their own hardware can use it to improve response times.
No commits in the last 6 months.
Use this if you are a developer who wants to speed up local LLM deployments for better performance and efficiency.
Not ideal if you are not a developer, or if you rely primarily on cloud-based LLM APIs and do not need local deployment optimization.
Stars: 24
Forks: 5
Language: Python
License: MIT
Category: (not listed)
Last pushed: May 10, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/zRzRzRzRzRzRzR/lm-fly"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
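If you prefer to query the endpoint from Python, a minimal sketch of the same request is below. It only assumes the endpoint returns JSON; the structure and field names of that response are not documented here, so the example simply prints whatever comes back.

import requests

# Same endpoint as the curl example above.
url = "https://pt-edge.onrender.com/api/v1/quality/transformers/zRzRzRzRzRzRzR/lm-fly"

resp = requests.get(url, timeout=10)
resp.raise_for_status()

# The response is assumed to be JSON; inspect the fields before relying on any of them.
data = resp.json()
print(data)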
Higher-rated alternatives
vllm-project/vllm
A high-throughput and memory-efficient inference and serving engine for LLMs
sgl-project/sglang
SGLang is a high-performance serving framework for large language models and multimodal models.
alibaba/MNN
MNN: A blazing-fast, lightweight inference engine battle-tested by Alibaba, powering...
xorbitsai/inference
Swap GPT for any LLM by changing a single line of code. Xinference lets you run open-source,...
tensorzero/tensorzero
TensorZero is an open-source stack for industrial-grade LLM applications. It unifies an LLM...