analyticsinmotion/werpy
🐍📦 Ultra-fast Python package for calculating and analyzing the Word Error Rate (WER). Built for the scalable evaluation of speech and transcription accuracy.
This tool helps you quickly and accurately measure how well a spoken phrase has been converted into written text, or how similar two pieces of text are. By comparing a "reference" (the correct text) to a "hypothesis" (the transcribed or predicted text), it calculates the Word Error Rate (WER) and shows you exactly where mistakes like insertions, deletions, or substitutions occurred. It's designed for speech-to-text engineers, transcription quality analysts, and researchers evaluating text generation systems.
Used by 1 other package. Available on PyPI.
Use this if you need to objectively quantify the accuracy of speech recognition systems, transcription services, or text generation models by comparing their output against a correct version.
Not ideal if you only need a simple 'match/no match' comparison, or if your primary goal is spell-checking or grammatical correction rather than detailed error analysis of text transcription.
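The WER described above is the word-level edit distance (insertions + deletions + substitutions) divided by the number of words in the reference. As a minimal illustrative sketch of that formula — not werpy's actual implementation, which is optimized for speed and scale — a plain Levenshtein-based version looks like this:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + deletions + insertions) / reference word count.

    A minimal dynamic-programming sketch of the metric werpy computes;
    werpy itself is a compiled, vectorized implementation.
    """
    ref = reference.split()
    hyp = hypothesis.split()
    # dp[i][j] = minimum edits to turn the first i reference words
    # into the first j hypothesis words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i  # i deletions
    for j in range(len(hyp) + 1):
        dp[0][j] = j  # j insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1  # substitution cost
            dp[i][j] = min(
                dp[i - 1][j] + 1,        # deletion
                dp[i][j - 1] + 1,        # insertion
                dp[i - 1][j - 1] + cost, # match or substitution
            )
    return dp[len(ref)][len(hyp)] / len(ref)
```

For example, comparing the reference "the quick brown fox" against the hypothesis "the quick brown dog" yields one substitution in four words, a WER of 0.25.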
Stars
23
Forks
6
Language
Python
License
BSD-3-Clause
Category
Last pushed
Mar 16, 2026
Commits (30d)
0
Dependencies
2
Reverse dependents
1
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/voice-ai/analyticsinmotion/werpy"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Related tools
kahne/fastwer
A PyPI package for fast word/character error rate (WER/CER) calculation
fgnt/meeteval
MeetEval - A meeting transcription evaluation toolkit
tabahi/bournemouth-forced-aligner
Extract phoneme-level timestamps from speech audio.
wq2012/SimpleDER
A lightweight library to compute Diarization Error Rate (DER).
readbeyond/aeneas
aeneas is a Python/C library and a set of tools to automagically synchronize audio and text (aka...