MegEngine/InferLLM

a lightweight LLM model inference framework

Score: 46 / 100 (Emerging)

This project helps developers integrate large language models (LLMs) into their applications, especially in on-device or resource-constrained environments. It loads quantized LLM models (such as Alpaca, Llama-2, ChatGLM, or Baichuan) and runs them efficiently on local hardware. It suits developers building for mobile phones, embedded devices, or desktop environments with limited GPU access.
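
To make that concrete, here is a minimal build-and-run sketch. The CMake build is the repository's standard flow; the demo binary name, the model file, and the -m / -t flags (model path, thread count) are assumptions modeled on llama.cpp-style runners, so check the repo's README for the exact targets and options.

# Build InferLLM from source (standard CMake flow).
git clone https://github.com/MegEngine/InferLLM.git
cd InferLLM && mkdir build && cd build
cmake .. && make -j4

# Run a quantized model locally. The binary name (./llama), the model
# file, and the -m / -t flags are assumptions; see the repo's README
# for the exact demo target and options.
./llama -m ./alpaca-q4.bin -t 4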

747 stars. No commits in the last 6 months.

Use this if you are a developer building an application that needs to run large language models efficiently on local hardware, including mobile devices, without relying on cloud services.

Not ideal if you are looking for a high-level API or a service that handles model deployment for you, or if you primarily work with unquantized, full-precision models on powerful data center GPUs.

Tags: mobile-application-development, on-device-AI, edge-computing, AI-inference-optimization, embedded-systems
Status: Stale (6 months), No Package, No Dependents
Maintenance: 0 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 20 / 25

The four subscores sum to the overall score of 46 / 100.

Stars: 747
Forks: 94
Language: C++
License: Apache-2.0
Last pushed: Apr 07, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/MegEngine/InferLLM"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
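
To inspect the response in a readable form, assuming the endpoint returns JSON (the schema is not documented on this page), pipe it through a formatter:

curl -s "https://pt-edge.onrender.com/api/v1/quality/transformers/MegEngine/InferLLM" | python3 -m json.tool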