khanld/Wav2vec2-Pretraining
Wav2vec 2.0 Self-Supervised Pretraining
This project helps machine learning engineers or researchers adapt the powerful Wav2vec 2.0 model for specialized audio tasks. You provide your custom audio datasets, and it trains a base model that understands the unique sound patterns in your data. This model can then be used as a starting point for developing custom speech recognition, speaker identification, or other audio processing applications.
No commits in the last 6 months.
Use this if you need to train a robust audio understanding model on your specific collection of spoken language or environmental sounds, where existing public models might not perform optimally.
Not ideal if you're looking for an out-of-the-box solution for common audio tasks without needing to train a custom model.
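The description above says the pretrained model is meant as a starting point for downstream audio tasks. As a minimal, hedged sketch of that reuse step (not code from this repo), here is how a wav2vec 2.0 checkpoint is typically loaded as a feature extractor with the Hugging Face `transformers` library; a tiny randomly initialized config stands in for a real checkpoint so the example runs without downloading weights:

```python
# Sketch: reusing a wav2vec 2.0 model as an audio feature extractor.
# The tiny config below is a stand-in; in practice you would load your
# own pretrained weights, e.g. Wav2Vec2Model.from_pretrained("path/to/ckpt").
import torch
from transformers import Wav2Vec2Config, Wav2Vec2Model

config = Wav2Vec2Config(
    hidden_size=32, num_hidden_layers=2, num_attention_heads=2,
    intermediate_size=64, num_feat_extract_layers=2,
    conv_dim=(32, 32), conv_kernel=(3, 3), conv_stride=(2, 2),
)
model = Wav2Vec2Model(config).eval()

# One second of fake 16 kHz audio in place of a real waveform.
waveform = torch.randn(1, 16000)
with torch.no_grad():
    features = model(waveform).last_hidden_state  # (batch, frames, hidden)
print(features.shape)
```

The resulting frame-level features can then feed a task head (CTC for speech recognition, pooling plus a classifier for speaker identification, and so on).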
Stars
59
Forks
10
Language
Python
License
—
Category
Last pushed
Feb 06, 2025
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/voice-ai/khanld/Wav2vec2-Pretraining"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
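For scripted use, the same endpoint shown in the curl command can be called from Python. The sketch below only constructs the URL and defines a fetch helper; the structure of the JSON response is not documented here, so inspect a real response before relying on any field names:

```python
# Hedged sketch of querying the stats endpoint from Python instead of curl.
import json
from urllib.request import Request, urlopen

BASE = "https://pt-edge.onrender.com/api/v1/quality/voice-ai"
repo = "khanld/Wav2vec2-Pretraining"
url = f"{BASE}/{repo}"

def fetch_stats(url: str) -> dict:
    """Fetch repo quality stats as parsed JSON (raises URLError if unreachable)."""
    req = Request(url, headers={"Accept": "application/json"})
    with urlopen(req, timeout=10) as resp:
        return json.load(resp)

print(url)
# stats = fetch_stats(url)  # performs the network call
```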
Higher-rated alternatives
liangstein/Chinese-speech-to-text
Chinese Speech To Text Using Wavenet
louiskirsch/speechT
Open-source speech-to-text software written in TensorFlow
Open-Speech-EkStep/vakyansh-models
Open source speech to text models for Indic Languages
oliverguhr/wav2vec2-live
Live speech recognition using Facebook's wav2vec 2.0 model.
Open-Speech-EkStep/vakyansh-wav2vec2-experimentation
A repository providing an experimentation platform for training and running inference with wav2vec2 models.