MegEngine/InferLLM
a lightweight LLM model inference framework
This project helps developers integrate large language models (LLMs) into their applications, especially in on-device or resource-constrained environments. It loads quantized LLM models (such as Alpaca, Llama-2, ChatGLM, or Baichuan) and runs them efficiently on local hardware. It suits developers building for mobile phones, embedded devices, or local desktop environments with limited GPU access.
747 stars. No commits in the last 6 months.
Use this if you are a developer building an application that needs to run large language models efficiently on local hardware, including mobile devices, without relying on cloud services.
Not ideal if you are looking for a high-level API or a service that handles model deployment for you, or if you primarily work with unquantized, full-precision models on powerful data center GPUs.
Stars
747
Forks
94
Language
C++
License
Apache-2.0
Category
Last pushed
Apr 07, 2024
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/MegEngine/InferLLM"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
vllm-project/vllm
A high-throughput and memory-efficient inference and serving engine for LLMs
sgl-project/sglang
SGLang is a high-performance serving framework for large language models and multimodal models.
alibaba/MNN
MNN: A blazing-fast, lightweight inference engine battle-tested by Alibaba, powering...
xorbitsai/inference
Swap GPT for any LLM by changing a single line of code. Xinference lets you run open-source,...
tensorzero/tensorzero
TensorZero is an open-source stack for industrial-grade LLM applications. It unifies an LLM...