sarthaxxxxx/AVROBUSTBENCH

Benchmarking robustness of audio-visual recognition models at test-time

Quality score: 34 / 100 (Emerging)

This project evaluates how well AI models that jointly process sound and video handle real-world distortions. It takes existing audio-visual data and applies a wide range of realistic, simultaneous audio and visual corruptions, such as noise, weather effects, or crowd sounds. Researchers and machine learning engineers working on robust audio-visual AI can use it to test and improve their models' reliability in challenging conditions.

Use this if you need to thoroughly assess how robust your audio-visual recognition models are when faced with common, correlated distortions in both sound and visuals.

Not ideal if you are only interested in evaluating models against single-modality corruptions or if your models do not process both audio and visual inputs.

Tags: AI model evaluation, audio-visual processing, robustness testing, machine learning research, computer vision
No package · No dependents
Maintenance: 6 / 25
Adoption: 5 / 25
Maturity: 16 / 25
Community: 7 / 25


Stars: 10
Forks: 1
Language: Jupyter Notebook
License: MIT
Last pushed: Nov 13, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/sarthaxxxxx/AVROBUSTBENCH"

Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000/day.
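The same endpoint can be called programmatically. A minimal Python sketch, assuming only what the curl command above shows (a GET endpoint returning JSON; the response fields themselves are not documented here, so `fetch_quality` makes no assumptions about them):

```python
# Minimal sketch for calling the pt-edge quality API.
# Only the URL pattern is taken from the page; the JSON structure is unknown.
import json
from urllib.request import urlopen


def quality_url(owner: str, repo: str) -> str:
    """Build the quality-API URL for an owner/repo pair, mirroring the curl example."""
    return f"https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/{owner}/{repo}"


def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch and decode the JSON payload (field names are not specified here)."""
    with urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)


print(quality_url("sarthaxxxxx", "AVROBUSTBENCH"))
```

Note that unauthenticated calls are rate-limited to 100 requests/day, so cache responses rather than re-fetching.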