TeaPoly/CTC-OptimizedLoss

Computes the minimum word error rate (MWER) loss with CTC beam search, and provides knowledge distillation for CTC loss.

Quality score: 32 / 100 (Emerging)

This project provides specialized loss functions for Connectionist Temporal Classification (CTC) models, which are commonly used in speech recognition. It takes raw CTC outputs (logits) and target labels, then calculates an optimized loss that can improve the accuracy of speech-to-text systems. Speech recognition engineers and researchers would use this to fine-tune their CTC-based models.
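To make the inputs and outputs concrete, here is a textbook sketch of the plain CTC negative log-likelihood that losses like these build on: per-frame class probabilities and a blank-free label sequence go in, a scalar loss comes out. This is a minimal NumPy implementation of the standard CTC forward algorithm, not the repository's code, and the function name `ctc_loss` is an assumption for illustration.

```python
import numpy as np

def ctc_loss(probs, target, blank=0):
    """Plain CTC negative log-likelihood via the forward algorithm (sketch).

    probs: (T, C) array of per-frame class probabilities (already softmaxed).
    target: label sequence without blanks, e.g. [1, 2, 2].
    """
    T = probs.shape[0]
    # Extended label sequence with blanks interleaved: [b, l1, b, l2, b, ...]
    ext = [blank]
    for label in target:
        ext += [label, blank]
    S = len(ext)

    alpha = np.zeros((T, S))
    alpha[0, 0] = probs[0, ext[0]]
    if S > 1:
        alpha[0, 1] = probs[0, ext[1]]
    for t in range(1, T):
        for s in range(S):
            a = alpha[t - 1, s]
            if s > 0:
                a += alpha[t - 1, s - 1]
            # A skip over the blank is allowed only between two different labels
            if s > 1 and ext[s] != blank and ext[s] != ext[s - 2]:
                a += alpha[t - 1, s - 2]
            alpha[t, s] = a * probs[t, ext[s]]
    # Valid alignments end on the last label or the final blank
    tail = alpha[T - 1, S - 2] if S > 1 else 0.0
    return -np.log(alpha[T - 1, S - 1] + tail)
```

For example, with two frames of uniform probabilities over {blank, a} and target "a", the valid alignments are "aa", "a-", and "-a", so the total probability is 0.75 and the loss is -log(0.75).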

No commits in the last 6 months.

Use this if you are developing or training a speech recognition model using CTC and want to apply advanced loss optimization techniques like Minimum Word Error Rate (MWER) or knowledge distillation to improve performance.
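As background, the MWER objective mentioned above minimizes the expected number of word errors over an N-best list produced by beam search, typically with the mean error subtracted as a variance-reduction baseline. The sketch below is a common formulation of that risk in NumPy, not the repository's implementation; the function name and arguments are assumptions for illustration.

```python
import numpy as np

def mwer_risk(log_scores, word_errors):
    """Expected word-error risk over an N-best list (a common MWER formulation).

    log_scores: unnormalized log-probabilities of the N-best hypotheses.
    word_errors: word edit distance of each hypothesis against the reference.
    """
    log_scores = np.asarray(log_scores, dtype=float)
    errors = np.asarray(word_errors, dtype=float)
    # Renormalize the model scores over the N-best list (softmax)
    p = np.exp(log_scores - log_scores.max())
    p /= p.sum()
    # Subtracting the mean error is a standard variance-reduction baseline
    baseline = errors.mean()
    return float(np.sum(p * (errors - baseline)))
```

Minimizing this risk pushes probability mass toward hypotheses with fewer word errors: the risk is negative when the model already favors the lower-error hypothesis, positive when it favors the higher-error one.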

Not ideal if you are looking for an out-of-the-box speech recognition solution or are not working directly with CTC model training.

speech-recognition automatic-speech-recognition deep-learning-training CTC-models natural-language-processing
No License · Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 8 / 25
Maturity 8 / 25
Community 16 / 25


Stars: 59
Forks: 11
Language: Python
License: None
Last pushed: Sep 06, 2023
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/voice-ai/TeaPoly/CTC-OptimizedLoss"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.