OpenBMB/UltraEval-Audio
Your faithful, impartial partner for audio evaluation: know yourself, know your rivals.
This framework helps audio researchers, machine learning engineers, and data scientists thoroughly evaluate the performance of large audio models. It takes audio models and standard audio datasets as input and outputs detailed performance metrics across diverse tasks such as speech recognition, speech generation, and audio codec quality. It is aimed at professionals building or deploying audio foundation models who need to understand their model's strengths and weaknesses against rivals.
Use this if you need to reliably benchmark and compare audio foundation models across a wide range of tasks and languages using standardized metrics and datasets, or want to replicate existing model evaluations with full transparency.
Not ideal if you are looking for a simple, quick way to test a single audio file's quality without needing comprehensive, multi-benchmark analysis.
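The standardized metrics mentioned above include things like word error rate (WER) for speech recognition. As a rough illustration of the kind of number such a benchmark reports, here is a minimal sketch using the third-party jiwer package; it is not part of UltraEval-Audio, and the transcripts below are invented examples.

# Minimal sketch of a word error rate (WER) calculation, the core metric
# used by speech recognition benchmarks. Uses the third-party `jiwer`
# package (not part of UltraEval-Audio); transcripts are invented examples.
from jiwer import wer

reference = "the quick brown fox jumps over the lazy dog"
hypothesis = "the quick brown fox jumped over a lazy dog"

# wer() counts substitutions, insertions, and deletions against the
# reference word count and returns the error rate as a float.
print(f"WER: {wer(reference, hypothesis):.2%}")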
Stars: 281
Forks: 21
Language: Python
License: Apache-2.0
Category: Voice AI
Last pushed: Feb 03, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/voice-ai/OpenBMB/UltraEval-Audio"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
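If you would rather consume the endpoint from Python than curl, a minimal sketch with the requests library follows. The response schema is not documented on this page, so the example simply pretty-prints whatever JSON the endpoint returns.

import json
import requests

# Same public endpoint as the curl example above; stays within the
# 100-requests/day tier that needs no API key.
URL = "https://pt-edge.onrender.com/api/v1/quality/voice-ai/OpenBMB/UltraEval-Audio"

resp = requests.get(URL, timeout=10)
resp.raise_for_status()

# Schema is undocumented here, so just pretty-print the returned JSON.
print(json.dumps(resp.json(), indent=2))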
Higher-rated alternatives
kahne/fastwer
A PyPI package for fast word/character error rate (WER/CER) calculation
analyticsinmotion/werpy
🐍📦 Ultra-fast Python package for calculating and analyzing the Word Error Rate (WER). Built for...
fgnt/meeteval
MeetEval - A meeting transcription evaluation toolkit
tabahi/bournemouth-forced-aligner
Extract phoneme-level timestamps from speech audio.
wq2012/SimpleDER
A lightweight library to compute Diarization Error Rate (DER).