cactus-compute/cactus
Low-latency AI engine for mobile devices & wearables
Cactus is an AI engine for running AI models directly on mobile devices and wearables. It takes speech, image, or text input and produces real-time outputs such as transcriptions, image analysis, or conversational responses. It is aimed at product managers, app developers, and device manufacturers who want to embed AI capabilities in mobile products without depending on cloud services.
4,430 stars. Actively maintained with 53 commits in the last 30 days.
Use this if you need to integrate fast, energy-efficient AI features like voice assistants, real-time image recognition, or intelligent chatbots directly into mobile apps or wearable devices with minimal memory usage.
Not ideal if your workload runs primarily on server-side infrastructure, where device-specific optimization and low-latency on-device inference matter less.
Stars: 4,430
Forks: 328
Language: C
License: —
Category:
Last pushed: Mar 13, 2026
Commits (30d): 53
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/rag/cactus-compute/cactus"
Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000/day.
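The curl command above can also be wrapped in a small client. Below is a minimal sketch using only the Python standard library; since the JSON response shape is not documented on this page, the helper simply fetches and decodes the payload rather than assuming any field names:

```python
import json
import urllib.request

# Endpoint pattern taken from the curl example on this page.
BASE = "https://pt-edge.onrender.com/api/v1/quality/rag"

def repo_quality_url(owner: str, repo: str) -> str:
    """Build the per-repository quality endpoint URL."""
    return f"{BASE}/{owner}/{repo}"

def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch and JSON-decode the quality data for one repository.

    Subject to the listed rate limit (100 requests/day without a key).
    """
    with urllib.request.urlopen(repo_quality_url(owner, repo)) as resp:
        return json.load(resp)

# Example (performs a network request):
#   data = fetch_quality("cactus-compute", "cactus")
```

Inspect the returned dictionary's keys once to see what the API actually provides before building on it.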
Related tools
langbot-app/LangBot
Production-grade platform for building agentic IM bots. Provides agents, knowledge-base orchestration, and a plugin system /...
open-webui/open-webui
User-friendly AI Interface (Supports Ollama, OpenAI API, ...)
sigoden/aichat
All-in-one LLM CLI tool featuring Shell Assistant, Chat-REPL, RAG, AI Tools & Agents, with...
rudrankriyam/Foundation-Models-Framework-Example
Example apps for Foundation Models Framework in iOS 26 and macOS 26
timmyy123/LLM-Hub
Local AI Assistant on your phone