m15-ai/Faster-Local-Voice-AI

A real-time, fully local voice AI system optimized for low-resource devices like an 8GB Ubuntu laptop with no GPU, achieving sub-second STT-to-TTS latency using Ollama, Vosk, Piper, and JACK/PipeWire. Open-source and privacy-focused for offline conversational AI.

Score: 37 / 100 (Emerging)

This project helps you create a fully private, real-time voice assistant that runs entirely on your own computer, even an older laptop without a powerful graphics card. You speak into your microphone, and the AI processes your speech, generates a response, and speaks it back to you, all without sending any data to the cloud. It's designed for anyone who needs immediate, natural spoken interaction with an AI while keeping their conversations completely private.
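The listen, transcribe, generate, speak loop described above can be sketched as a simple pipeline. The function bodies below are illustrative stubs, not the project's actual code: in the real system Vosk handles speech-to-text, a local Ollama model generates the reply, and Piper synthesizes the spoken output.

```python
# Illustrative sketch of one turn of the on-device voice loop.
# Each stage is a stub standing in for the real component.

def transcribe(audio_chunk: bytes) -> str:
    """STT stage: in the real system, Vosk converts mic audio to text."""
    return audio_chunk.decode("utf-8")  # stub: pretend the audio is already text


def generate_reply(prompt: str) -> str:
    """LLM stage: in the real system, a local Ollama model writes the reply."""
    return f"You said: {prompt}"  # stub response


def speak(text: str) -> bytes:
    """TTS stage: in the real system, Piper renders audio for playback."""
    return text.encode("utf-8")  # stub: return "audio" bytes


def converse(audio_chunk: bytes) -> bytes:
    """One conversational turn: audio in, spoken reply out, never leaving the device."""
    text = transcribe(audio_chunk)
    reply = generate_reply(text)
    return speak(reply)
```

The key design point the sketch illustrates is that every stage runs locally, so no audio or text leaves the machine at any step.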

No commits in the last 6 months.

Use this if you need an instant, conversational AI experience that prioritizes privacy and operates offline on standard computer hardware.

Not ideal if you need an AI that can be interrupted mid-sentence or if you prefer a system managed entirely by a third-party service.

privacy-focused-ai offline-ai-assistant local-voice-interface personal-ai speech-recognition
Status: Stale (6 months) · No Package · No Dependents

Maintenance: 2 / 25
Adoption: 6 / 25
Maturity: 15 / 25
Community: 14 / 25


Stars: 23
Forks: 4
Language: Python
License: MIT
Last pushed: Jul 21, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/voice-ai/m15-ai/Faster-Local-Voice-AI"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
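The curl call above can also be wrapped in a small Python helper using only the standard library. The `Authorization: Bearer` header shown is an assumption, since the page does not document how an API key is passed, and the response schema is not documented here, so the result is returned as an unparsed dict.

```python
import json
import urllib.request
from typing import Optional

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"


def build_url(category: str, owner: str, repo: str) -> str:
    """Build the quality-score endpoint URL for a repository."""
    return f"{API_BASE}/{category}/{owner}/{repo}"


def fetch_quality(category: str, owner: str, repo: str,
                  api_key: Optional[str] = None) -> dict:
    """Fetch the quality report for a repo and return the parsed JSON body."""
    req = urllib.request.Request(build_url(category, owner, repo))
    if api_key:
        # Assumed header; a key raises the limit from 100 to 1,000 requests/day.
        req.add_header("Authorization", f"Bearer {api_key}")
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp)
```

For example, `fetch_quality("voice-ai", "m15-ai", "Faster-Local-Voice-AI")` requests the same data as the curl command above.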