BlinkDL/RWKV-LM

RWKV (pronounced RwaKuv) is an RNN with great LLM performance, which can also be directly trained like a GPT transformer (parallelizable). We are at RWKV-7 "Goose". It combines the best of RNN and transformer: great performance, linear time, constant space (no kv-cache), fast training, infinite ctx_len, and free sentence embedding.
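
For intuition, here is a minimal sketch of the kind of per-channel recurrence behind RWKV's constant-space inference, in the style of the RWKV-4 WKV update. The real CUDA kernels add numerical stabilization and RWKV-7 uses a different state update, so treat all names and details below as illustrative assumptions:

    import numpy as np

    def wkv_step(a, b, k, v, w, u):
        # One recurrent step of a simplified per-channel WKV update.
        # a, b: running numerator/denominator -- this pair is the entire state,
        #       so memory stays constant however long the sequence gets (no kv-cache).
        # k, v: key/value for the current token; w: positive decay; u: current-token bonus.
        out = (a + np.exp(u + k) * v) / (b + np.exp(u + k))
        a = np.exp(-w) * a + np.exp(k) * v
        b = np.exp(-w) * b + np.exp(k)
        return out, a, b

    # Toy usage: the state is two fixed-size vectors, regardless of sequence length.
    n = 4
    a, b = np.zeros(n), np.zeros(n)
    w, u = 0.5 * np.ones(n), np.zeros(n)
    rng = np.random.default_rng(0)
    for _ in range(10):
        k, v = rng.normal(size=n), rng.normal(size=n)
        out, a, b = wkv_step(a, b, k, v, w, u)

Because each step depends only on the fixed-size state, the same recurrence can be unrolled across a sequence during training (parallelizable, like a transformer) and run token by token at inference (like an RNN).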

Score: 57 / 100 (Established)

RWKV is an AI model architecture designed for language and multimodal tasks. It generates text and handles multimodal inputs with linear-time inference and constant memory use, offering transformer-level performance at RNN-level resource cost. It is primarily used by AI practitioners and researchers who build and train large language models or similar AI systems.

14,414 stars. Actively maintained, with 2 commits in the last 30 days.

Use this if you are an AI developer or researcher looking to train large language models with improved efficiency and constant memory usage, especially on resource-constrained hardware.

Not ideal if you are an end-user simply looking to chat with an existing AI model without custom training or fine-tuning.

Tags: large-language-models, AI-architecture, deep-learning-training, efficient-AI, multimodal-AI
No package. No dependents.
Maintenance: 13 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 18 / 25


Stars: 14,414
Forks: 997
Language: Python
License: Apache-2.0
Last pushed: Mar 05, 2026
Commits (30d): 2

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/BlinkDL/RWKV-LM"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
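
A minimal sketch of the same request from Python, assuming the endpoint returns JSON. The response fields and the API-key mechanism are not documented on this page, so the snippet just prints whatever comes back:

    import requests  # third-party; pip install requests

    URL = "https://pt-edge.onrender.com/api/v1/quality/transformers/BlinkDL/RWKV-LM"

    resp = requests.get(URL, timeout=10)
    resp.raise_for_status()  # fail loudly on rate-limit or server errors
    data = resp.json()       # assumption: the API responds with JSON
    print(data)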