umbertocappellazzo/Llama-AVSR
Official PyTorch implementation of "Large Language Models are Strong Audio-Visual Speech Recognition Learners" [ICASSP 2025] and "Mitigating Attention Sinks and Massive Activations in Audio-Visual Speech Recognition with LLMs" [ICASSP 2026].
This project helps researchers and developers advance speech recognition technology by providing a specialized large language model. It takes raw audio, video, or both as input and outputs highly accurate transcriptions of spoken language. Its primary users are AI/ML researchers and engineers working on audio-visual speech recognition systems.
Use this if you are a researcher or engineer looking to develop or improve advanced speech recognition systems that leverage both audio and visual cues, especially in challenging environments.
Not ideal if you need an out-of-the-box solution for general-purpose speech-to-text conversion without deep technical expertise in AI model training.
Stars: 57
Forks: 5
Language: Python
License: —
Category:
Last pushed: Jan 18, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/umbertocappellazzo/Llama-AVSR"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
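For programmatic use, the curl call above can be wrapped in a short script. This is a minimal sketch, assuming the endpoint returns a JSON body; the response field names are not documented here, so the example just fetches and pretty-prints whatever comes back. The `ecosystem`/`repo` path segments follow the URL shown above.

```python
# Sketch: query the pt-edge quality API shown above.
# Assumption: the endpoint returns a JSON object; its schema is not
# specified on this page, so inspect the real response before relying on fields.
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(ecosystem: str, repo: str) -> str:
    """Build the API URL, e.g. for ('transformers', 'umbertocappellazzo/Llama-AVSR')."""
    return f"{BASE}/{ecosystem}/{repo}"


def fetch_quality(ecosystem: str, repo: str, timeout: float = 10.0) -> dict:
    """GET the endpoint and parse the JSON body (no API key needed up to 100 req/day)."""
    with urllib.request.urlopen(quality_url(ecosystem, repo), timeout=timeout) as resp:
        return json.loads(resp.read().decode("utf-8"))


if __name__ == "__main__":
    data = fetch_quality("transformers", "umbertocappellazzo/Llama-AVSR")
    print(json.dumps(data, indent=2))
```

With a free API key (1,000 requests/day), you would typically pass it as a header or query parameter; check the provider's docs for the exact mechanism, as it is not stated on this page.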
Higher-rated alternatives
KimMeen/Time-LLM
[ICLR 2024] Official implementation of " 🦙 Time-LLM: Time Series Forecasting by Reprogramming...
om-ai-lab/VLM-R1
Solve Visual Understanding with Reinforced VLMs
bytedance/SALMONN
SALMONN family: A suite of advanced multi-modal LLMs
NVlabs/OmniVinci
OmniVinci is an omni-modal LLM for joint understanding of vision, audio, and language.
fixie-ai/ultravox
A fast multimodal LLM for real-time voice