symblai/speech-recognition-evaluation

Evaluate results from ASR/Speech-to-Text quickly

Score: 47/100 (Emerging)

This tool helps you quickly assess how accurately an automated speech-to-text system transcribes audio. You provide two text files: one with a human-generated transcript (the 'gold standard') and another from the automated system. It then calculates metrics like Word Error Rate and highlights differences to show how well your speech recognition is performing. It is intended for anyone who uses or develops speech-to-text systems and needs to measure transcription accuracy.
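
For context, Word Error Rate is the word-level edit distance between the reference transcript and the hypothesis, divided by the number of words in the reference. A minimal JavaScript sketch of that formula (illustrative only; this is not the package's implementation, and it assumes a non-empty reference):

// WER = (substitutions + deletions + insertions) / words in reference.
function wordErrorRate(reference, hypothesis) {
  const ref = reference.trim().split(/\s+/);
  const hyp = hypothesis.trim().split(/\s+/);

  // dp[i][j] = edit distance between the first i reference words
  // and the first j hypothesis words.
  const dp = Array.from({ length: ref.length + 1 }, (_, i) =>
    Array.from({ length: hyp.length + 1 }, (_, j) => (i === 0 ? j : j === 0 ? i : 0))
  );

  for (let i = 1; i <= ref.length; i++) {
    for (let j = 1; j <= hyp.length; j++) {
      const cost = ref[i - 1] === hyp[j - 1] ? 0 : 1;
      dp[i][j] = Math.min(
        dp[i - 1][j] + 1,       // deletion
        dp[i][j - 1] + 1,       // insertion
        dp[i - 1][j - 1] + cost // substitution or match
      );
    }
  }

  return dp[ref.length][hyp.length] / ref.length;
}

// Example: one substitution over four reference words -> WER 0.25
console.log(wordErrorRate('the cat sat down', 'the cat sat town')); // 0.25

A full evaluator would also break the distance down into substitutions, insertions, and deletions, and align the two texts to highlight the differences the description above mentions.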

No commits in the last 6 months. Available on npm.

Use this if you need to compare the quality of an automatically generated transcript against a human-verified one to understand its accuracy.

Not ideal if you need to transcribe audio files; this tool focuses solely on evaluating existing text transcripts.

Tags: speech-to-text, transcription-quality, ASR-evaluation, voice-AI, content-moderation
Score breakdown (the four subscores sum to the overall 47/100):

Maintenance: 0/25 (stale; no commits in 6 months)
Adoption: 7/25
Maturity: 25/25
Community: 15/25

Stars: 41
Forks: 7
Language: JavaScript
License: Apache-2.0
Last pushed: Dec 28, 2021
Commits (30d): 0
Dependencies: 4

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/voice-ai/symblai/speech-recognition-evaluation"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
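
The same endpoint can also be called from JavaScript, matching the project's own language. A minimal sketch for Node 18+ (where fetch is built in); the shape of the returned JSON is not documented here, so inspect the payload before relying on specific fields:

// Run as an ES module (e.g. node quality.mjs) so top-level await is available.
const url =
  'https://pt-edge.onrender.com/api/v1/quality/voice-ai/symblai/speech-recognition-evaluation';

const res = await fetch(url);
if (!res.ok) throw new Error(`Request failed with status ${res.status}`);

// Log the raw payload; individual field names are assumptions until verified.
const data = await res.json();
console.log(JSON.stringify(data, null, 2));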