mravanelli/pytorch_MLP_for_ASR

This code implements a basic MLP for speech recognition. The MLP is trained with PyTorch, while feature extraction, alignments, and decoding are performed with Kaldi. The current implementation supports dropout and batch normalization. An example of phoneme recognition using the standard TIMIT dataset is provided.

Overall score: 33 / 100 (Emerging)

This project helps speech recognition researchers and engineers develop and evaluate acoustic models for HMM-DNN speech recognition systems. It takes speech features and alignments generated by Kaldi and uses them to train a Multi-Layer Perceptron (MLP) acoustic model with PyTorch. The output is a trained MLP model that can be integrated into a larger speech recognition pipeline for tasks like phoneme recognition.
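The architecture described above can be sketched in PyTorch as a stack of feed-forward layers with batch normalization and dropout, mapping Kaldi feature frames to per-frame state posteriors. This is a minimal illustrative sketch, not the repository's actual code: the feature dimension, hidden size, number of output states, and layer count below are assumptions.

```python
import torch
import torch.nn as nn

# Hypothetical sketch of an MLP acoustic model with dropout and batch
# normalization, as described above. All dimensions are illustrative
# assumptions, not values taken from the repository.
class MLPAcousticModel(nn.Module):
    def __init__(self, feat_dim=440, hidden_dim=1024, num_states=1944,
                 num_layers=4, dropout=0.15):
        super().__init__()
        layers = []
        in_dim = feat_dim
        for _ in range(num_layers):
            layers += [
                nn.Linear(in_dim, hidden_dim),
                nn.BatchNorm1d(hidden_dim),   # batch normalization
                nn.ReLU(),
                nn.Dropout(dropout),          # dropout regularization
            ]
            in_dim = hidden_dim
        layers.append(nn.Linear(in_dim, num_states))
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        # x: (batch, feat_dim) spliced feature frames from Kaldi
        return self.net(x)  # unnormalized per-state scores (logits)

model = MLPAcousticModel()
model.eval()  # put BatchNorm/Dropout in inference mode
logits = model(torch.randn(8, 440))
print(tuple(logits.shape))  # (8, 1944)
```

In an HMM-DNN pipeline, the logits would be converted to log-posteriors, divided by state priors, and passed back to Kaldi for decoding.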

No commits in the last 6 months.

Use this if you are a speech recognition researcher or engineer looking to train a basic MLP acoustic model using PyTorch, with feature extraction and decoding handled by Kaldi.

Not ideal if you are looking for a complete, end-to-end speech recognition solution that does not require prior Kaldi expertise or existing Kaldi-generated features and alignments.

Tags: speech-recognition · acoustic-modeling · phoneme-recognition · deep-learning-for-speech · ASR-research
Badges: No License · Stale (6 months) · No Package · No Dependents
Score breakdown:
- Maintenance: 0 / 25
- Adoption: 7 / 25
- Maturity: 8 / 25
- Community: 18 / 25


Repository stats:
- Stars: 40
- Forks: 13
- Language: Perl
- License: none
- Last pushed: Feb 10, 2018
- Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/voice-ai/mravanelli/pytorch_MLP_for_ASR"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
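The same endpoint shown in the curl example can be called from Python with the standard library. The URL below is taken from the curl command above; the structure of the returned JSON (its field names) is not documented here, so the fetch helper simply decodes whatever the API returns.

```python
import json
import urllib.request

# Endpoint shape taken from the curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/quality/voice-ai"

def quality_url(owner: str, repo: str) -> str:
    """Build the per-repo quality endpoint URL."""
    return f"{BASE}/{owner}/{repo}"

def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch and decode the JSON quality report (makes a network call)."""
    with urllib.request.urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)

print(quality_url("mravanelli", "pytorch_MLP_for_ASR"))
# → https://pt-edge.onrender.com/api/v1/quality/voice-ai/mravanelli/pytorch_MLP_for_ASR
```

Within the free tier, `fetch_quality` can be called up to 100 times per day without a key, or 1,000 times per day with a free key.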