kan-bayashi/ParallelWaveGAN

Unofficial Parallel WaveGAN (+ MelGAN & Multi-band MelGAN & HiFi-GAN & StyleMelGAN) with PyTorch

Score: 51 / 100 (Established)

This project helps developers working on advanced speech synthesis create natural-sounding spoken audio, and even singing voices, in real time. It takes acoustic representations (such as Mel spectrograms) and converts them into high-quality raw audio waveforms. The ideal user is a machine learning engineer or researcher building custom text-to-speech (TTS) or singing voice synthesis (SVS) systems.

1,637 stars. No commits in the last 6 months.

Use this if you need to integrate a state-of-the-art neural vocoder into a text-to-speech or singing voice synthesis system that requires high-quality, real-time audio output.

Not ideal if you are looking for a ready-to-use application to convert text to speech without any coding or model integration.

speech-synthesis text-to-speech singing-voice-synthesis audio-generation voice-cloning
Status: Stale (6 months) · No package · No dependents
Maintenance: 0 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 25 / 25


Stars: 1,637
Forks: 352
Language: Jupyter Notebook
License: MIT
Last pushed: Apr 22, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/voice-ai/kan-bayashi/ParallelWaveGAN"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
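For programmatic access, the same endpoint can be queried from Python. The sketch below builds the URL from its path segments and fetches the record with the standard library; the `category/owner/repo` path layout is inferred from the example URL above, and the JSON response schema is an assumption (the API's actual fields are not documented here):

```python
import json
from urllib.request import urlopen

# Base path inferred from the curl example above.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the quality-score endpoint URL for a repository."""
    return f"{API_BASE}/{category}/{owner}/{repo}"


def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """Fetch and parse the quality record (response schema is an assumption)."""
    with urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)


# The endpoint for this repository:
print(quality_url("voice-ai", "kan-bayashi", "ParallelWaveGAN"))
```

No API key is needed within the free 100 requests/day tier, so `fetch_quality` sends no authentication headers; a keyed request would presumably add one, but the header name is not specified here.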