PureBee/purebee

A GPU defined in software. Runs Llama 3.2 1B at 3.6 tok/sec. Zero dependencies.

Score: 43 / 100 (Emerging)

PureBee lets developers run large language models such as Llama 3.2 1B on any device using only a CPU, with no dedicated graphics card or driver stack required. It loads a language model and generates text responses, making LLM inference accessible in standard computing environments. It is aimed at developers who need to integrate AI capabilities into applications running in environments that lack specialized GPU hardware.

Use this if you are a developer who needs to embed AI inference capabilities directly into applications running on standard CPUs, without reliance on GPU hardware or specific graphics drivers.

Not ideal if you already have access to powerful GPUs and are seeking the absolute fastest inference speeds, as PureBee prioritizes accessibility over raw computational power.

Tags: AI-inference, edge-computing, software-development, machine-learning-deployment, CPU-optimization
No package · No dependents
Maintenance: 10 / 25
Adoption: 6 / 25
Maturity: 11 / 25
Community: 16 / 25


Stars: 22
Forks: 7
Language: JavaScript
License:
Last pushed: Feb 27, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/PureBee/purebee"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
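The same endpoint can be called programmatically. A minimal JavaScript sketch, assuming the URL structure from the curl example above (the `transformers` path segment and the JSON response format are inferred from that one example, not from documented API behavior):

```javascript
// Base endpoint taken from the curl example above; the path layout
// (base + /owner/repo) is an assumption inferred from that example.
const BASE = "https://pt-edge.onrender.com/api/v1/quality/transformers";

// Build the quality-API URL for a given GitHub owner/repo pair.
function qualityUrl(owner, repo) {
  return `${BASE}/${encodeURIComponent(owner)}/${encodeURIComponent(repo)}`;
}

console.log(qualityUrl("PureBee", "purebee"));
// https://pt-edge.onrender.com/api/v1/quality/transformers/PureBee/purebee
```

In Node 18+ (which ships a global `fetch`), `fetch(qualityUrl("PureBee", "purebee")).then(r => r.json())` would then retrieve the payload, assuming the API returns JSON.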