fastwer and werpy
These are direct competitors: both are Python packages for calculating WER/CER metrics for ASR evaluation. fastwer focuses on fast sentence- and corpus-level scoring, while werpy emphasizes scalable evaluation and a breakdown of individual errors (insertions, deletions, and substitutions). They are alternative solutions for the same task rather than tools designed to work together.
About fastwer
kahne/fastwer
A PyPI package for fast word/character error rate (WER/CER) calculation
This tool helps speech scientists and researchers quickly evaluate the accuracy of their speech-to-text systems. You provide the text output from your system (hypothesis) and the correct, human-transcribed text (reference). It then calculates how many words or characters are incorrect, either for individual sentences or for an entire collection of text, providing a clear error rate.
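The error rate described above is the word- (or character-) level Levenshtein edit distance between hypothesis and reference, divided by the reference length. Below is a minimal pure-Python sketch of that calculation; it illustrates the metric fastwer computes, not fastwer's actual (compiled) implementation.

```python
def edit_distance(ref_tokens, hyp_tokens):
    """Classic dynamic-programming Levenshtein distance over tokens."""
    m, n = len(ref_tokens), len(hyp_tokens)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i
    for j in range(n + 1):
        dp[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if ref_tokens[i - 1] == hyp_tokens[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution/match
    return dp[m][n]

def wer(reference, hypothesis):
    """Word error rate: edit distance over reference word count."""
    ref = reference.split()
    return edit_distance(ref, hypothesis.split()) / len(ref)

# One substituted word out of four reference words -> WER of 0.25.
print(wer("the quick brown fox", "the quick brown dog"))  # 0.25
```

For character error rate (CER), the same computation runs over the characters of each string instead of the whitespace-split words.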
About werpy
analyticsinmotion/werpy
🐍📦 Ultra-fast Python package for calculating and analyzing the Word Error Rate (WER). Built for the scalable evaluation of speech and transcription accuracy.
This tool helps you quickly and accurately measure how well a spoken phrase has been converted into written text, or how similar two pieces of text are. By comparing a "reference" (the correct text) to a "hypothesis" (the transcribed or predicted text), it calculates the Word Error Rate (WER) and shows you exactly where mistakes like insertions, deletions, or substitutions occurred. It's designed for speech-to-text engineers, transcription quality analysts, and researchers evaluating text generation systems.
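The per-error breakdown described above (insertions, deletions, substitutions) can be recovered by tracing back through the edit-distance table after it is filled. The sketch below is a hedged illustration of that technique in pure Python; werpy's own API and internals may differ.

```python
def error_breakdown(reference, hypothesis):
    """Count substitutions, insertions, and deletions between two texts
    by backtracing the word-level edit-distance table."""
    ref, hyp = reference.split(), hypothesis.split()
    m, n = len(ref), len(hyp)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i
    for j in range(n + 1):
        dp[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,
                           dp[i][j - 1] + 1,
                           dp[i - 1][j - 1] + cost)
    # Walk back from the corner, classifying each edit along the way.
    i, j = m, n
    subs = ins = dels = 0
    while i > 0 or j > 0:
        if i > 0 and j > 0 and dp[i][j] == dp[i - 1][j - 1] \
                and ref[i - 1] == hyp[j - 1]:
            i, j = i - 1, j - 1            # exact match, no error
        elif i > 0 and j > 0 and dp[i][j] == dp[i - 1][j - 1] + 1:
            subs += 1; i, j = i - 1, j - 1  # wrong word transcribed
        elif j > 0 and dp[i][j] == dp[i][j - 1] + 1:
            ins += 1; j -= 1               # extra word in hypothesis
        else:
            dels += 1; i -= 1              # reference word missing
    return {"wer": (subs + ins + dels) / m,
            "substitutions": subs, "insertions": ins, "deletions": dels}

# "a" is dropped and "time" is added: one deletion plus one insertion.
print(error_breakdown("it was a dark and stormy night",
                      "it was dark and stormy night time"))
```

Reporting the three counts separately, rather than only the aggregate rate, is what lets an analyst see whether a transcription system tends to drop words, hallucinate extra ones, or mishear them.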