RightNow-AI/picolm
Run a 1-billion parameter LLM on a $10 board with 256MB RAM
PicoLM helps developers integrate powerful language AI directly into small, low-cost devices like a Raspberry Pi Zero, creating fully offline AI agents. It takes text input (like a question or command) and produces a text response or structured JSON, without needing internet access or expensive cloud services. This is ideal for developers building embedded AI solutions or privacy-focused applications.
Use this if you are a developer looking to add a local, privacy-preserving large language model to a project running on budget-friendly hardware like a Raspberry Pi, or to create an offline AI assistant.
Not ideal if you need to run large, state-of-the-art LLMs that require significant computational power, or if your application already relies on cloud-based AI services.
Stars: 1,364
Forks: 159
Language: C
License: MIT
Category:
Last pushed: Feb 22, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/RightNow-AI/picolm"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Related tools
vllm-project/vllm-ascend
Community-maintained hardware plugin for vLLM on Ascend.
kvcache-ai/Mooncake
Mooncake is the serving platform for Kimi, a leading LLM service provided by Moonshot AI.
SemiAnalysisAI/InferenceX
Open Source Continuous Inference Benchmarking Qwen3.5, DeepSeek, GPTOSS - GB200 NVL72 vs MI355X...
sophgo/tpu-mlir
Machine learning compiler based on MLIR for Sophgo TPU.
uccl-project/uccl
UCCL is an efficient communication library for GPUs, covering collectives, P2P (e.g., KV cache...