VPanjeta/PyLLaMa-CPU
Fast LLaMa inference on CPU using llama.cpp for Python
This tool lets developers run LLaMA large language models directly on a CPU, with no specialized graphics card required. You supply a converted LLaMA model file, and it generates text from your prompts. It is aimed at developers building Python applications that integrate LLaMA capabilities.
No commits in the last 6 months.
Use this if you are a Python developer who needs to incorporate fast, CPU-only LLaMA text generation into your applications.
Not ideal if you are looking for a plug-and-play AI chat application or do not have experience with Python development and model conversion.
Stars: 9
Forks: 1
Language: C
License: MIT
Category:
Last pushed: Mar 23, 2023
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/VPanjeta/PyLLaMa-CPU"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
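The curl call above can be wrapped in a small Python helper. This is a minimal sketch: the URL pattern is taken directly from the example shown here, but the response format (assumed to be JSON) and the meaning of the `transformers` path segment are assumptions, not documented behavior.

```python
import json
import urllib.request

# Base path copied from the curl example above.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(ecosystem: str, repo: str) -> str:
    """Build the quality-API URL for a repo given as 'owner/name'."""
    return f"{API_BASE}/{ecosystem}/{repo}"


def fetch_quality(ecosystem: str, repo: str) -> dict:
    """Fetch quality data for a repo; assumes the endpoint returns JSON."""
    with urllib.request.urlopen(quality_url(ecosystem, repo)) as resp:
        return json.load(resp)


if __name__ == "__main__":
    # No API key needed for up to 100 requests/day, per the note above.
    print(quality_url("transformers", "VPanjeta/PyLLaMa-CPU"))
```

Requests beyond the free tier would presumably need the key mentioned above; how the key is passed (header vs. query parameter) is not shown on this page.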
Higher-rated alternatives
vllm-project/vllm
A high-throughput and memory-efficient inference and serving engine for LLMs
sgl-project/sglang
SGLang is a high-performance serving framework for large language models and multimodal models.
alibaba/MNN
MNN: A blazing-fast, lightweight inference engine battle-tested by Alibaba, powering...
xorbitsai/inference
Swap GPT for any LLM by changing a single line of code. Xinference lets you run open-source,...
tensorzero/tensorzero
TensorZero is an open-source stack for industrial-grade LLM applications. It unifies an LLM...