EmbeddedLLM/embeddedllm
EmbeddedLLM: API server for embedded device deployment. Currently supports CUDA/OpenVINO/IpexLLM/DirectML/CPU backends.
EmbeddedLLM helps you run large language models such as Llama, Mistral, and Phi on your local computer, even without a high-end graphics card. It takes a model file and exposes it through an OpenAI-compatible API, so you can integrate it into your applications or use the built-in chatbot interface. This is ideal for developers and researchers who want to test or deploy LLMs without relying on cloud services.
No commits in the last 6 months.
Use this if you need to run popular large language models on your local machine using its existing integrated GPU, APU, or CPU, and want an easy way to interact with them via an OpenAI-compatible interface or a simple web UI.
Not ideal if you primarily work with cloud-based LLM APIs or require support for highly specialized model architectures not listed.
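Because the server speaks the OpenAI-compatible chat-completions protocol, a client can be sketched with only the standard library. The base URL, port, and model name below are assumptions for illustration; the actual values depend on how the server is launched (see the project README).

```python
import json
from urllib import request

# Hypothetical local endpoint; the real host/port depend on
# how the EmbeddedLLM server is started.
BASE_URL = "http://localhost:8000/v1"


def build_chat_request(model: str, prompt: str) -> dict:
    """Build an OpenAI-style chat-completion payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }


def chat(model: str, prompt: str) -> str:
    """POST the payload to the local server and return the reply text."""
    payload = build_chat_request(model, prompt)
    req = request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        body = json.load(resp)
    # Standard OpenAI response shape: first choice's message content.
    return body["choices"][0]["message"]["content"]
```

Any OpenAI SDK can be pointed at the same base URL instead, which is the usual way such servers are consumed.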
Stars
51
Forks
4
Language
Python
License
—
Category
—
Last pushed
Oct 06, 2024
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/EmbeddedLLM/embeddedllm"
Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000/day.
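The same endpoint can be called from a script. A minimal sketch, assuming the response is JSON (the exact schema is not documented here):

```python
import json
from urllib import request

API = "https://pt-edge.onrender.com/api/v1/quality/transformers"


def quality_url(owner: str, repo: str) -> str:
    """Build the quality-endpoint URL for a given repository."""
    return f"{API}/{owner}/{repo}"


def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch the quality record (100 requests/day without a key)."""
    with request.urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)


# Usage: data = fetch_quality("EmbeddedLLM", "embeddedllm")
```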
Higher-rated alternatives
vllm-project/vllm
A high-throughput and memory-efficient inference and serving engine for LLMs
sgl-project/sglang
SGLang is a high-performance serving framework for large language models and multimodal models.
alibaba/MNN
MNN: A blazing-fast, lightweight inference engine battle-tested by Alibaba, powering...
xorbitsai/inference
Swap GPT for any LLM by changing a single line of code. Xinference lets you run open-source,...
tensorzero/tensorzero
TensorZero is an open-source stack for industrial-grade LLM applications. It unifies an LLM...