fgnt/meeteval

MeetEval - A meeting transcription evaluation toolkit

Quality score: 61 / 100 (Established)

This tool helps researchers and engineers assess the accuracy of automatic speech recognition (ASR) systems for meeting recordings. You input reference transcripts (what was actually said) and hypothesis transcripts (what the ASR system output), and it calculates various Word Error Rate (WER) metrics. The output shows how well the ASR system performed, accounting for challenges like multiple speakers and overlapping speech.
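As a rough illustration of the core metric: WER is the word-level edit distance (substitutions, deletions, insertions) between reference and hypothesis, divided by the reference length. A minimal sketch of plain single-channel WER, not using MeetEval itself:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Plain WER: Levenshtein distance over words / reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # i deletions to reach an empty hypothesis
    for j in range(len(hyp) + 1):
        d[0][j] = j  # j insertions from an empty reference
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(
                d[i - 1][j] + 1,         # deletion
                d[i][j - 1] + 1,         # insertion
                d[i - 1][j - 1] + cost,  # substitution or match
            )
    return d[-1][-1] / len(ref)

# One deleted word out of six reference words -> WER of 1/6
print(word_error_rate("the cat sat on the mat", "the cat sat on mat"))
```

MeetEval's multi-speaker metrics (e.g. cpWER, ORC WER) build on this idea but additionally solve the assignment between reference speakers and hypothesis output streams, which a plain WER like the sketch above does not handle.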

149 stars. Used by 1 other package. Available on PyPI.

Use this if you need to precisely measure the transcription quality of ASR systems for multi-speaker meetings and understand different types of errors.

Not ideal if you're evaluating ASR for single-speaker scenarios or if you need to analyze alternative transcriptions (e.g., "i've { um / uh } as far as i'm concerned").

speech-recognition meeting-transcription ASR-evaluation audio-analysis natural-language-processing
Maintenance 10 / 25
Adoption 11 / 25
Maturity 25 / 25
Community 15 / 25


Stars: 149
Forks: 18
Language: Python
License: MIT
Last pushed: Jan 27, 2026
Commits (30d): 0
Dependencies: 7
Reverse dependents: 1

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/voice-ai/fgnt/meeteval"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
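The same endpoint can be called from Python. A minimal sketch; note that the JSON field layout of the response is not documented here, so decoding is left generic:

```python
import json
from urllib.request import urlopen

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the endpoint URL for a given package."""
    return f"{API_BASE}/{category}/{owner}/{repo}"

def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """Fetch and decode the JSON payload (schema assumed, not documented here)."""
    with urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)

# Matches the curl example above:
print(quality_url("voice-ai", "fgnt", "meeteval"))
# https://pt-edge.onrender.com/api/v1/quality/voice-ai/fgnt/meeteval
```

Without an API key this shares the 100 requests/day limit, so cache responses rather than refetching per lookup.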