umbertocappellazzo/Llama-AVSR

Official PyTorch implementation of "Large Language Models are Strong Audio-Visual Speech Recognition Learners" [ICASSP 2025] and "Mitigating Attention Sinks and Massive Activations in Audio-Visual Speech Recognition with LLMs" [ICASSP 2026].

Quality score: 35 / 100 (Emerging)

This project helps researchers and developers advance speech recognition technology by providing an LLM-based audio-visual speech recognition (AVSR) model. It takes raw audio, video, or both as input and outputs transcriptions of the spoken language. Its primary users are AI/ML researchers and engineers working on audio-visual speech recognition systems.

Use this if you are a researcher or engineer looking to develop or improve advanced speech recognition systems that leverage both audio and visual cues, especially in challenging environments.

Not ideal if you need an out-of-the-box solution for general-purpose speech-to-text conversion without deep technical expertise in AI model training.

speech-recognition computer-vision natural-language-processing multimodal-ai audio-analysis
No License · No Package · No Dependents
Maintenance 10 / 25
Adoption 8 / 25
Maturity 8 / 25
Community 9 / 25


Stars: 57
Forks: 5
Language: Python
License: none
Last pushed: Jan 18, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/umbertocappellazzo/Llama-AVSR"

Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000 requests/day.
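For programmatic access, the same endpoint can be called from Python. A minimal sketch, assuming the endpoint returns a JSON body (the response shape is not documented here, so the result is returned as a raw dict rather than parsed into named fields):

```python
import json
import urllib.request

# Endpoint from the curl example above.
API_URL = "https://pt-edge.onrender.com/api/v1/quality/transformers/umbertocappellazzo/Llama-AVSR"

def fetch_quality(url: str = API_URL) -> dict:
    """Fetch the quality report and return the parsed JSON body."""
    req = urllib.request.Request(url, headers={"Accept": "application/json"})
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp)

# Example (requires network access; subject to the 100 requests/day limit):
# report = fetch_quality()
# print(report)
```

Swap in an API-key header once you have a free key, following whatever header name the service documents.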