sunprinceS/MetaASR-CrossAccent

Meta-Learning for End-to-End ASR

Quality score: 28 / 100 (Experimental)

This project helps speech researchers and machine learning engineers pre-train speech recognition models that can adapt to different accents with limited data. It takes audio data and corresponding transcripts for various accents as input, and outputs a pre-trained model that recognizes speech across those accents more effectively than standard models.
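The core idea behind this kind of pre-training is a MAML-style inner/outer loop: adapt a copy of the model to one accent's data, then update the shared meta-parameters so that such adaptation works well across accents. The sketch below illustrates that loop on a toy 1-D regression stand-in (each "task" plays the role of an accent); the real project meta-trains an end-to-end ASR model, and this toy setup, the first-order approximation, and all names here are illustrative assumptions, not the repository's code.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_task():
    # Toy stand-in for "one accent": a regression task with its own slope.
    slope = rng.uniform(0.5, 2.0)
    x = rng.uniform(-1.0, 1.0, size=(32, 1))
    return x, slope * x

def loss_grad(w, x, y):
    # Mean-squared error and its gradient for a linear model y = x @ w.
    residual = x @ w - y
    loss = float(np.mean(residual ** 2))
    grad = 2.0 * x.T @ residual / len(x)
    return loss, grad

w = np.zeros((1, 1))              # shared meta-parameters
inner_lr, outer_lr = 0.1, 0.05

for step in range(300):
    x, y = sample_task()
    xs, ys = x[:16], y[:16]       # support set: data used to adapt
    xq, yq = x[16:], y[16:]       # query set: data used to evaluate adaptation

    # Inner loop: one gradient step adapts to the sampled task ("accent").
    _, g = loss_grad(w, xs, ys)
    w_adapted = w - inner_lr * g

    # Outer loop (first-order MAML): update the meta-parameters with the
    # query-set gradient taken at the adapted weights.
    _, gq = loss_grad(w_adapted, xq, yq)
    w = w - outer_lr * gq
```

After meta-training, `w` sits near the center of the task distribution, so a single inner step on a handful of examples from a new task already fits it well; the analogous claim for ASR is that a few minutes of a new accent suffice for adaptation.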

No commits in the last 6 months.

Use this if you are a speech researcher or machine learning engineer focused on building robust ASR systems that perform well across multiple accents, especially in scenarios with limited data for each accent.

Not ideal if you need a ready-to-use ASR application, or if your goal is general speech recognition without a focus on cross-accent adaptation via meta-learning.

speech-recognition accent-adaptation low-resource-speech natural-language-processing audio-processing
Stale (6m) · No Package · No Dependents
Maintenance: 0 / 25
Adoption: 5 / 25
Maturity: 16 / 25
Community: 7 / 25


Stars: 10
Forks: 1
Language: Jupyter Notebook
License: MIT
Last pushed: Aug 08, 2020
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/voice-ai/sunprinceS/MetaASR-CrossAccent"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
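From Python, the same endpoint can be queried with the standard library alone. A minimal sketch, assuming the endpoint returns JSON as the curl example suggests (the `quality_url` helper and the deferred-fetch pattern are this sketch's own, not part of the API's documentation):

```python
from urllib.parse import quote

# Base path taken from the curl example above.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality/voice-ai"

def quality_url(owner: str, repo: str) -> str:
    # Build the per-repository quality endpoint URL, escaping path segments.
    return f"{API_BASE}/{quote(owner, safe='')}/{quote(repo, safe='')}"

url = quality_url("sunprinceS", "MetaASR-CrossAccent")

# Fetching and decoding is left to the caller, e.g.:
# import json, urllib.request
# with urllib.request.urlopen(url) as resp:
#     data = json.load(resp)
```

Keeping the URL construction separate from the network call makes the helper easy to reuse for other repositories without hitting the rate limit while developing.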