second-state/echokit_server
Open Source Voice Agent Platform
EchoKit Server helps developers create and manage custom voice AI agents for EchoKit devices. It takes spoken input from the device, processes it with AI models for understanding and response generation, and sends back an audible reply. It is aimed at software developers who want to build and deploy specialized conversational AI experiences.
Use this if you are a software developer looking to build custom voice AI applications that integrate with physical EchoKit devices and require flexible control over speech recognition, language models, and text-to-speech services.
Not ideal if you are an end-user simply wanting to use an off-the-shelf voice assistant, as this requires technical expertise to set up and configure AI services.
Stars: 551
Forks: 77
Language: Rust
License: GPL-3.0
Category:
Last pushed: Feb 15, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/second-state/echokit_server"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
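For scripting against other repositories, the endpoint appears to follow an owner/repo path pattern. A minimal shell sketch, assuming that pattern generalizes beyond this one example (the URL template is inferred from the single request above, not from published API docs):

```shell
# Build the API URL from an owner/repo pair.
# NOTE: the .../llm-tools/{owner}/{repo} pattern is an assumption
# based on the example request shown above.
owner="second-state"
repo="echokit_server"
url="https://pt-edge.onrender.com/api/v1/quality/llm-tools/${owner}/${repo}"

echo "$url"
# Fetch without a key (rate-limited to 100 requests/day):
# curl "$url"
```

Swapping `owner` and `repo` lets the same two-line change target any other listed tool.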
Related tools
bigsk1/voice-chat-ai
🎙️ Speak with AI - Run locally using Ollama, OpenAI, Anthropic or xAI - Speech uses SparkTTS,...
digiteinfotech/kairon
Agentic AI platform that harnesses Visual LLM Chaining to build proactive digital assistants
withcatai/catai
Run AI ✨ assistant locally! with simple API for Node.js 🚀
AmberSahdev/Open-Interface
Control Any Computer Using LLMs.
syxanash/maxheadbox
Tiny truly local voice-activated LLM Agent that runs on a Raspberry Pi