Faster-Local-Voice-AI and Local-Voice
These are ecosystem siblings: the second is a simplified, refactored successor that trims the first's complexity, replacing the JACK/PipeWire audio-routing layer with direct Vosk/Piper integration while serving the same offline voice assistant use case.
About Faster-Local-Voice-AI
m15-ai/Faster-Local-Voice-AI
A real-time, fully local voice AI system optimized for low-resource devices like an 8GB Ubuntu laptop with no GPU, achieving sub-second STT-to-TTS latency using Ollama, Vosk, Piper, and JACK/PipeWire. Open-source and privacy-focused for offline conversational AI.
This project helps you create a fully private, real-time voice assistant that runs entirely on your own computer, even an older laptop without a powerful graphics card. You speak into your microphone, and the AI processes your speech, generates a response, and speaks it back to you, all without sending any data to the cloud. It's designed for anyone who needs immediate, natural spoken interaction with an AI while keeping their conversations completely private.
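To make the described pipeline concrete, here is a minimal sketch of the speech-to-text and LLM steps, assuming Vosk's Python bindings, the sounddevice package for direct microphone capture, and Ollama's HTTP API on its default local port. The Vosk model path and the Ollama model name are placeholders, not code from the repository.

```python
import json
import queue

import requests          # assumption: Ollama serving its HTTP API on localhost:11434
import sounddevice as sd # assumption: plain mic capture rather than JACK/PipeWire routing
from vosk import Model, KaldiRecognizer

SAMPLE_RATE = 16000
audio_q = queue.Queue()

def on_audio(indata, frames, time_info, status):
    # Push raw 16-bit PCM chunks from the microphone into a queue for Vosk.
    audio_q.put(bytes(indata))

# Placeholder path to a downloaded Vosk model directory.
recognizer = KaldiRecognizer(Model("model/vosk-model-small-en-us-0.15"), SAMPLE_RATE)

with sd.RawInputStream(samplerate=SAMPLE_RATE, blocksize=8000, dtype="int16",
                       channels=1, callback=on_audio):
    while True:
        if recognizer.AcceptWaveform(audio_q.get()):
            text = json.loads(recognizer.Result()).get("text", "")
            if not text:
                continue
            # Ask the local LLM served by Ollama for a reply; nothing leaves the machine.
            reply = requests.post(
                "http://localhost:11434/api/generate",
                json={"model": "llama3.2:1b", "prompt": text, "stream": False},
            ).json()["response"]
            print(f"You: {text}\nAssistant: {reply}")
```

This sketch prints the reply; the actual projects hand it to Piper for speech output, as shown in the sketch further below.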
About Local-Voice
m15-ai/Local-Voice
A real-time, offline voice assistant for Linux and Raspberry Pi. Uses local LLMs (via Ollama), speech-to-text (Vosk), and text-to-speech (Piper) for fast, wake-free voice interaction. No cloud. No APIs. Just Python, a mic, and your voice.
This project offers a personal, real-time voice assistant that runs entirely on your local computer, without needing internet access or cloud services. You speak into a microphone, and the assistant processes your request using a local AI model to generate a spoken response. It's designed for anyone who wants a private, responsive voice assistant for their home or office, especially on devices like a Raspberry Pi or Linux desktop.
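For the spoken-response half of that loop, a minimal sketch of text-to-speech with Piper might look like the following, assuming the piper command-line binary, a downloaded voice model, and ALSA's aplay for playback; the voice and output paths are placeholders.

```python
import subprocess

def speak(text: str,
          voice: str = "voices/en_US-lessac-medium.onnx",  # placeholder voice model path
          wav_path: str = "/tmp/reply.wav") -> None:
    # Piper reads text on stdin and writes a WAV file to the given output path.
    subprocess.run(
        ["piper", "--model", voice, "--output_file", wav_path],
        input=text.encode("utf-8"),
        check=True,
    )
    # Play the synthesized audio with aplay, available on most Linux and Raspberry Pi setups.
    subprocess.run(["aplay", wav_path], check=True)

speak("All processing stayed on this machine.")
```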