sarthaxxxxx/AVROBUSTBENCH
Benchmarking robustness of audio-visual recognition models at test-time
This project evaluates how well models that jointly process audio and video hold up under real-world distortions. It takes existing audio-visual data and applies a wide range of realistic, simultaneous audio and visual corruptions, such as noise, weather effects, or crowd sounds. Scientists and machine learning engineers building robust audio-visual models can use it to test and improve their models' reliability in challenging conditions.
Use this if you need to thoroughly assess how robust your audio-visual recognition models are when faced with common, correlated distortions in both sound and visuals.
Not ideal if you are only interested in evaluating models against single-modality corruptions or if your models do not process both audio and visual inputs.
Stars
10
Forks
1
Language
Jupyter Notebook
License
MIT
Category
Last pushed
Nov 13, 2025
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/sarthaxxxxx/AVROBUSTBENCH"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
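The same endpoint can be called from Python. A minimal sketch is below; note that the response schema and the `Authorization` header name for keyed access are assumptions, since the listing documents only the URL and the rate limits:

```python
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks"


def build_url(owner, repo):
    """Build the per-repository quality endpoint URL."""
    return f"{API_BASE}/{owner}/{repo}"


def fetch_quality(owner, repo, api_key=None):
    """Fetch the quality record as a dict.

    The Bearer-token header is an assumption; check the API docs
    for the actual keyed-access mechanism.
    """
    req = urllib.request.Request(build_url(owner, repo))
    if api_key:
        req.add_header("Authorization", f"Bearer {api_key}")  # assumed header
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp)


print(build_url("sarthaxxxxx", "AVROBUSTBENCH"))
```

Without a key this stays within the 100-requests/day anonymous tier; pass `api_key` once you have a free key for the 1,000/day tier.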
Higher-rated alternatives
namkoong-lab/dro
A package of distributionally robust optimization (DRO) methods. Implemented via cvxpy and PyTorch
neu-autonomy/nfl_veripy
Formal Verification of Neural Feedback Loops (NFLs)
THUDM/grb
Graph Robustness Benchmark: A scalable, unified, modular, and reproducible benchmark for...
MinghuiChen43/awesome-trustworthy-deep-learning
A curated list of trustworthy deep learning papers. Daily updating...
ADA-research/VERONA
A lightweight Python package for setting up robustness experiments and to compute robustness...