VITA-MLLM/Freeze-Omni

✨✨Freeze-Omni: A Smart and Low Latency Speech-to-speech Dialogue Model with Frozen LLM

Score: 41/100 (Emerging)

This project offers a speech-to-speech dialogue system for intelligent, near real-time spoken conversation: you speak to it, and it generates a spoken reply with minimal delay while keeping the underlying LLM frozen. It's aimed at anyone building low-latency conversational AI, such as customer-service agents or virtual-assistant developers.

369 stars. No commits in the last 6 months.

Use this if you need a highly responsive conversational AI that understands spoken language and generates intelligent spoken replies with minimal delay.

Not ideal if your primary need is text-based interaction or if you operate in an environment with poor network connectivity or low-performance hardware.

conversational-ai speech-recognition natural-language-processing virtual-assistants customer-service
Stale (6m) · No Package · No Dependents

Maintenance: 2/25
Adoption: 10/25
Maturity: 16/25
Community: 13/25


Stars: 369
Forks: 25
Language: Python
License:
Last pushed: May 27, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/VITA-MLLM/Freeze-Omni"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
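The same request can be issued from Python's standard library. A minimal sketch, assuming the endpoint returns JSON (the response schema is not documented on this card, so downstream code should treat the keys as unknown):

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"
# Repository coordinates from the card above; the "transformers" path
# segment is copied verbatim from the curl example.
URL = f"{BASE}/transformers/VITA-MLLM/Freeze-Omni"


def fetch_quality(url: str, timeout: float = 10.0) -> dict:
    """Fetch the quality report as parsed JSON.

    The response schema is not documented here, so callers should
    inspect the returned dict rather than assume specific keys.
    """
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        return json.load(resp)


# fetch_quality(URL) performs the same request as the curl command above.
```

Within the free tier this can be called up to 100 times per day without a key.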