RWKV/rwkv.cpp
INT4/INT5/INT8 and FP16 inference on CPU for RWKV language model
This project lets developers run the RWKV large language model on commodity CPUs, with optional GPU offload, at reduced memory cost. It converts an existing RWKV model file into an optimized ggml-based format and then supports text generation or chatbot interaction directly on your machine. It is aimed at software developers and machine learning engineers who need to deploy and run LLMs efficiently on diverse hardware, especially machines with limited GPU resources.
1,563 stars. No commits in the last 6 months.
Use this if you are a developer looking to deploy and run RWKV-based large language models efficiently on CPU-centric systems or need to optimize model size and inference speed.
Not ideal if you are a non-technical end-user looking for a ready-to-use application or service, as this requires development setup and coding knowledge.
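As a rough sketch, the typical workflow is convert, optionally quantize, then generate. The script names below follow the helper scripts shipped in the rwkv.cpp repository; the model filenames and the Q5_1 format choice are illustrative assumptions, not the only options:

```shell
# 1. Convert a PyTorch RWKV checkpoint (.pth) into the ggml format used by rwkv.cpp.
#    The input model path is a placeholder; use your own downloaded checkpoint.
python rwkv/convert_pytorch_to_ggml.py RWKV-model.pth rwkv-f16.bin FP16

# 2. Optionally quantize to an integer format (e.g. Q5_1) to shrink the file
#    and reduce memory usage at some cost in accuracy.
python rwkv/quantize.py rwkv-f16.bin rwkv-q5_1.bin Q5_1

# 3. Generate text with the quantized model.
python rwkv/generate_completions.py rwkv-q5_1.bin
```

The quantization step is what enables the INT4/INT5/INT8 inference mentioned in the tagline; skipping it keeps the model in FP16.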
Stars: 1,563
Forks: 125
Language: C++
License: MIT
Category: (none)
Last pushed: Mar 23, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/RWKV/rwkv.cpp"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
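The same endpoint can be queried from Python. The URL pattern below is taken from the curl command above; the response schema is not documented here, so this sketch only builds the URL and returns the raw JSON (function names are my own, not part of any official client):

```python
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality/transformers"


def quality_api_url(owner: str, repo: str) -> str:
    """Build the quality-data URL for a GitHub repo, following the curl example."""
    return f"{API_BASE}/{owner}/{repo}"


def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch and decode the quality data (requires network; no API key needed
    up to 100 requests/day per the note above)."""
    with urllib.request.urlopen(quality_api_url(owner, repo)) as resp:
        return json.load(resp)


# Example (performs a network request):
#   data = fetch_quality("RWKV", "rwkv.cpp")
#   print(json.dumps(data, indent=2))
```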
Higher-rated alternatives
vllm-project/vllm: A high-throughput and memory-efficient inference and serving engine for LLMs
sgl-project/sglang: SGLang is a high-performance serving framework for large language models and multimodal models.
alibaba/MNN: MNN: A blazing-fast, lightweight inference engine battle-tested by Alibaba, powering...
xorbitsai/inference: Swap GPT for any LLM by changing a single line of code. Xinference lets you run open-source,...
tensorzero/tensorzero: TensorZero is an open-source stack for industrial-grade LLM applications. It unifies an LLM...