OpenBMB/UltraEval-Audio

Your faithful, impartial partner for audio evaluation — honest benchmarks; know yourself, know your rivals.

Quality score: 49 / 100 — Emerging

This framework helps audio researchers, machine learning engineers, and data scientists thoroughly evaluate the performance of large audio models. It takes audio models and standard audio datasets as input and outputs detailed performance metrics across diverse tasks such as speech recognition, speech generation, and audio codec quality. It is aimed at professionals building or deploying audio foundation models who need to understand their model's strengths and weaknesses relative to rivals.


Use this if you need to reliably benchmark and compare audio foundation models across a wide range of tasks and languages using standardized metrics and datasets, or want to replicate existing model evaluations with full transparency.

Not ideal if you are looking for a simple, quick way to test a single audio file's quality without needing comprehensive, multi-benchmark analysis.

audio-AI speech-recognition speech-synthesis audio-processing model-benchmarking
No package · No dependents
Maintenance: 10 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 13 / 25


Stars: 281
Forks: 21
Language: Python
License: Apache-2.0
Last pushed: Feb 03, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/voice-ai/OpenBMB/UltraEval-Audio"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
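The curl call above can also be scripted. Below is a minimal Python sketch, assuming the path pattern `/api/v1/quality/{category}/{owner}/{repo}` generalizes from the single example shown; the JSON response schema is not documented here, so the decoded result is treated as an opaque dict.

```python
# Minimal sketch of calling the quality API from Python.
# Assumption: the /api/v1/quality/{category}/{owner}/{repo} path pattern
# is generalized from the one curl example above; the response fields
# are not documented here.
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the endpoint URL for a given repository."""
    return f"{BASE}/{category}/{owner}/{repo}"


def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """Fetch and decode the quality report (100 requests/day without a key)."""
    with urllib.request.urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)


if __name__ == "__main__":
    print(quality_url("voice-ai", "OpenBMB", "UltraEval-Audio"))
```

For higher request volumes, a free API key (1,000 requests/day) would presumably be passed with the request; how it is supplied (header or query parameter) is not specified here.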