taresh18/conversify

🗣️ Real‑time, low‑latency voice, vision, and conversational‑memory AI assistant built on LiveKit and local LLMs ✨

Quality score: 45 / 100 (Emerging)

This project helps developers build highly responsive AI assistants that hold natural, real-time voice conversations and interpret visual input. It takes live audio and video streams, processes them with local language and vision models, and generates spoken responses almost instantly. It is aimed at developers building custom, low-latency conversational AI applications with persistent memory.

108 stars. No commits in the last 6 months.

Use this if you are a developer building a real-time AI assistant that requires extremely low latency for voice and basic vision capabilities, leveraging locally hosted AI models.

Not ideal if you need a ready-to-use AI assistant application without any coding, or if your primary requirement is complex, high-resolution image analysis.

Tags: AI-assistant-development, real-time-voice-AI, local-LLM-integration, conversational-AI-prototyping, voice-user-interfaces
Flags: Stale (6 months) · No package · No dependents
Maintenance: 2 / 25
Adoption: 9 / 25
Maturity: 16 / 25
Community: 18 / 25

How are scores calculated?
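The four category subscores listed above, each out of 25, sum exactly to the headline score, which suggests a simple additive model. A minimal sketch of that arithmetic (figures taken from the breakdown above; the additive model itself is an assumption, not documented by the site):

```python
# Category subscores from the breakdown above, each out of 25.
subscores = {
    "Maintenance": 2,
    "Adoption": 9,
    "Maturity": 16,
    "Community": 18,
}

# Assumed scoring model: overall score is the plain sum, out of 100.
total = sum(subscores.values())
print(total)  # 45, matching the "45 / 100" headline score
```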

Stars: 108
Forks: 20
Language: Python
License: Apache-2.0
Last pushed: Jun 25, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/taresh18/conversify"

Open to everyone: 100 requests/day with no key needed; a free key raises the limit to 1,000 requests/day.
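The same report can be fetched from Python instead of curl. A minimal sketch using only the standard library; the endpoint URL is taken from the curl example above, while the helper names (`quality_url`, `fetch_report`) and any JSON field names are assumptions for illustration:

```python
import json
import urllib.request

# Base endpoint taken from the curl example above.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality/llm-tools"


def quality_url(owner: str, repo: str) -> str:
    """Build the report URL for a GitHub owner/repo pair."""
    return f"{API_BASE}/{owner}/{repo}"


def fetch_report(owner: str, repo: str) -> dict:
    """Fetch and decode the JSON quality report.

    Works without an API key (100 requests/day); the response
    schema is not documented here, so inspect the dict you get back.
    """
    with urllib.request.urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)


if __name__ == "__main__":
    print(quality_url("taresh18", "conversify"))
```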