Picovoice/speech-to-text-benchmark

speech to text benchmark framework

Quality score: 57 / 100 (Established)

This tool helps developers and machine learning engineers compare the performance of different speech-to-text engines. It takes audio datasets and a chosen set of speech-to-text engines as input, then outputs detailed metrics: accuracy (Word Error Rate and Punctuation Error Rate), processing efficiency (Core-Hour), and real-time responsiveness (Word Emission Latency). This lets a technical professional objectively evaluate which engine best suits their application's accuracy, speed, and resource requirements.
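To make the headline accuracy metric concrete: Word Error Rate is the word-level edit distance between a reference transcript and an engine's hypothesis, divided by the reference word count. The benchmark's own implementation may differ in normalization details; this is a minimal sketch of the standard formula.

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + insertions + deletions) / reference word count,
    computed via word-level Levenshtein distance."""
    ref, hyp = reference.split(), hypothesis.split()
    # dist[i][j] = edit distance between ref[:i] and hyp[:j]
    dist = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dist[i][0] = i
    for j in range(len(hyp) + 1):
        dist[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dist[i][j] = min(
                dist[i - 1][j] + 1,         # deletion
                dist[i][j - 1] + 1,         # insertion
                dist[i - 1][j - 1] + cost,  # substitution
            )
    return dist[len(ref)][len(hyp)] / len(ref)
```

For example, one substituted word in a three-word reference yields a WER of 1/3.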

683 stars. Actively maintained with 1 commit in the last 30 days.

Use this if you need to quantitatively compare the accuracy and efficiency of various speech-to-text services and models for your development project.

Not ideal if you are an end-user simply looking to transcribe audio without needing to benchmark different underlying technologies.

Tags: speech-recognition, natural-language-processing, voice-ai, model-evaluation, software-development

No package · No dependents
Score breakdown:
- Maintenance: 13 / 25
- Adoption: 10 / 25
- Maturity: 16 / 25
- Community: 18 / 25


Stars: 683
Forks: 73
Language: Python
License: Apache-2.0
Last pushed: Mar 03, 2026
Commits (30d): 1

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/voice-ai/Picovoice/speech-to-text-benchmark"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000/day.
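The curl command above can also be issued from Python. A minimal sketch using only the standard library follows; the helper names (`quality_url`, `fetch_quality`) are illustrative, and the JSON response schema is not documented here, so the decoded dict's fields are whatever the service returns.

```python
import json
import urllib.request

# Base endpoint taken from the curl example above.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the API endpoint URL for a repo's quality data."""
    return f"{API_BASE}/{category}/{owner}/{repo}"

def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """Fetch and decode the JSON quality report for a repo.
    Performs a live HTTP request; schema of the result is an assumption."""
    with urllib.request.urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)
```

For example, `fetch_quality("voice-ai", "Picovoice", "speech-to-text-benchmark")` requests the same resource as the curl command above.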