cactus-compute/cactus

Low-latency AI engine for mobile devices & wearables

Score: 66 / 100 (Established)

Cactus is a powerful AI engine designed for running various AI models directly on mobile devices and wearables. It takes inputs like speech, images, or text and provides real-time AI responses, such as transcriptions, image analysis, or conversational outputs. This tool is for product managers, app developers, and device manufacturers who want to embed AI capabilities into their mobile products without relying heavily on cloud services.

4,430 stars. Actively maintained with 53 commits in the last 30 days.

Use this if you need to integrate fast, energy-efficient AI features like voice assistants, real-time image recognition, or intelligent chatbots directly into mobile apps or wearable devices with minimal memory usage.

Not ideal if your application primarily processes large, complex AI tasks on server-side infrastructure where device-specific optimizations and low-latency on-device inference are not critical.

Tags: on-device AI, mobile AI, edge AI, wearable technology, embedded AI
No package · No dependents
Maintenance: 22 / 25
Adoption: 10 / 25
Maturity: 15 / 25
Community: 19 / 25


Stars: 4,430
Forks: 328
Language: C
License: not listed
Last pushed: Mar 13, 2026
Commits (30d): 53

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/rag/cactus-compute/cactus"

Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000 requests/day.
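For programmatic use, the curl call above can be reproduced in a few lines of Python. This is a minimal sketch using only the standard library; the `quality_url` and `fetch_quality` helper names are illustrative, and no assumptions are made about the JSON response schema beyond it being a JSON document.

```python
import json
import urllib.request

# Base endpoint from the curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/quality/rag"

def quality_url(owner: str, repo: str) -> str:
    """Build the quality-API URL for a given GitHub owner/repo pair."""
    return f"{BASE}/{owner}/{repo}"

def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch and decode the JSON quality report.

    Each call performs an HTTP request and counts against the daily quota.
    """
    with urllib.request.urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)
```

Calling `fetch_quality("cactus-compute", "cactus")` issues the request shown in the curl example; the decoded dictionary can then be inspected or stored as needed.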