umbertocappellazzo/Omni-AVSR
Official PyTorch implementation of "Omni-AVSR: Towards Unified Multimodal Speech Recognition with Large Language Models" [IEEE ICASSP 2026].
This project offers a unified solution for converting spoken language from audio, video, or combined audio-visual sources into text. It takes raw audio, video footage of a speaker, or both, and outputs a written transcript of what was said. Anyone working with multimedia content, such as media analysts, documentary producers, or researchers studying human communication, can use this to efficiently process and transcribe speech.
Use this if you need to accurately transcribe speech from videos, audio recordings, or a combination of both, and want a single, efficient tool to handle all these formats.
Not ideal if you only need basic text transcription without considering visual cues, or if you require transcription for very niche languages not covered by standard speech recognition datasets.
Stars: 31
Forks: 2
Language: Python
License: —
Category:
Last pushed: Mar 10, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/voice-ai/umbertocappellazzo/Omni-AVSR"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
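The same endpoint can be queried programmatically. A minimal sketch in Python using only the standard library, assuming the URL pattern shown in the curl example above (`/api/v1/quality/<category>/<owner>/<repo>`); the shape of the returned JSON is an assumption based on the fields listed on this page:

```python
import json
import urllib.request

# Base of the quality API as shown in the curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/quality"


def repo_quality_url(category: str, owner: str, repo: str) -> str:
    """Build the quality-API URL for a repository."""
    return f"{BASE}/{category}/{owner}/{repo}"


def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """Fetch the quality record as a dict.

    No API key is required for up to 100 requests/day; the JSON
    field names (stars, forks, ...) are an assumption, not documented here.
    """
    url = repo_quality_url(category, owner, repo)
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)


if __name__ == "__main__":
    # Reproduces the curl example for this repository.
    print(repo_quality_url("voice-ai", "umbertocappellazzo", "Omni-AVSR"))
```

With a free key (1,000 requests/day), the key would presumably be passed as a header or query parameter; that detail is not specified on this page.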
Higher-rated alternatives
canopyai/Orpheus-TTS
Towards Human-Sounding Speech
lifeiteng/vall-e
PyTorch implementation of VALL-E(Zero-Shot Text-To-Speech), Reproduced Demo...
Plachtaa/VALL-E-X
An open source implementation of Microsoft's VALL-E X zero-shot TTS model. Demo is available in...
primepake/learnable-speech
This repo is text to speech with learnable audio encoder without alignment with transcript reference
ExplainableML/ZerAuCap
[NeurIPS 2023 - ML for Audio Workshop (Oral)] Zero-shot audio captioning with audio-language...