fgnt/meeteval
MeetEval - A meeting transcription evaluation toolkit
This tool helps researchers and engineers assess the accuracy of automatic speech recognition (ASR) systems for meeting recordings. You provide reference transcripts (what was actually said) and hypothesis transcripts (what the ASR system output), and it calculates several multi-speaker Word Error Rate (WER) variants such as cpWER, ORC WER, and MIMO WER. The output shows how well the ASR system performed, accounting for challenges like multiple speakers and overlapping speech.
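All of these metrics build on the basic word error rate: the word-level edit distance between reference and hypothesis, divided by the reference length. The sketch below is an illustrative plain-Python implementation of that core computation, not MeetEval's own API; the multi-speaker variants the toolkit provides additionally solve an assignment problem between speakers or streams on top of this.

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference length."""
    ref = reference.split()
    hyp = hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i  # i deletions
    for j in range(len(hyp) + 1):
        dp[0][j] = j  # j insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            substitution = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            deletion = dp[i - 1][j] + 1
            insertion = dp[i][j - 1] + 1
            dp[i][j] = min(substitution, deletion, insertion)
    return dp[len(ref)][len(hyp)] / len(ref)

# One deleted word out of six reference words:
print(wer("the cat sat on the mat", "the cat sat on mat"))  # → 0.1666...
```

Note that WER can exceed 1.0 when the hypothesis contains many insertions, which is common in overlapped-speech regions where a single-speaker ASR system merges two voices.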
149 stars. Used by 1 other package. Available on PyPI.
Use this if you need to precisely measure the transcription quality of ASR systems for multi-speaker meetings and understand different types of errors.
Not ideal if you're evaluating ASR for single-speaker scenarios or if you need to analyze alternative transcriptions (e.g., "i've { um / uh } as far as i'm concerned").
Stars
149
Forks
18
Language
Python
License
MIT
Category
Voice AI
Last pushed
Jan 27, 2026
Commits (30d)
0
Dependencies
7
Reverse dependents
1
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/voice-ai/fgnt/meeteval"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Related tools
kahne/fastwer
A PyPI package for fast word/character error rate (WER/CER) calculation
analyticsinmotion/werpy
🐍📦 Ultra-fast Python package for calculating and analyzing the Word Error Rate (WER). Built for...
tabahi/bournemouth-forced-aligner
Extract phoneme-level timestamps from speech audio.
wq2012/SimpleDER
A lightweight library to compute Diarization Error Rate (DER).
readbeyond/aeneas
aeneas is a Python/C library and a set of tools to automagically synchronize audio and text (aka...