r9y9/jsut-lab
HTS-style full-context labels for JSUT v1.1
This repository provides HTS-style full-context label files for the JSUT corpus: phonetic and prosodic annotations with precise timing information, derived from the corpus's raw audio and text. Speech researchers and engineers can use these labels to build and evaluate Japanese text-to-speech (TTS) and voice conversion systems.
No commits in the last 6 months.
Use this if you are a speech researcher building or evaluating text-to-speech or voice conversion models for Japanese and need pre-processed, HTS-style labels for the JSUT corpus.
Not ideal if you require perfectly accurate, hand-annotated labels, as these are automatically generated and may contain errors.
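HTS-style label files pair start/end times (in 100 ns units) with a full-context label string whose quinphone core encodes the current phone between `-` and `+`. A minimal sketch of reading one such line, assuming the standard HTS layout; the sample label below is illustrative, not taken from the actual corpus:

```python
def parse_lab_line(line):
    """Parse one HTS-style label line: start, end, full-context label.

    Times are in 100 ns units; convert to seconds. The current phone
    sits between "-" and "+" in the quinphone part of the label.
    """
    start, end, label = line.split(maxsplit=2)
    phone = label.split("-")[1].split("+")[0]
    return int(start) * 1e-7, int(end) * 1e-7, phone


# Illustrative line (not from the corpus):
t0, t1, phone = parse_lab_line("0 2025000 xx^xx-sil+m=i")
# phone is "sil"; t1 is 0.2025 seconds
```

Real full-context labels carry many more prosodic fields after the quinphone (accent, mora position, etc.); a production parser should follow the label format definition shipped with the repository.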
Stars: 51
Forks: 2
Language: —
License: MIT
Category:
Last pushed: Apr 16, 2021
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/voice-ai/r9y9/jsut-lab"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
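The same endpoint can be queried from Python with the standard library. A minimal sketch, assuming the endpoint returns JSON (inspect the raw response first; the schema is not documented here):

```python
import json
import urllib.request

# Quality-card endpoint for this repository, as given in the listing.
URL = "https://pt-edge.onrender.com/api/v1/quality/voice-ai/r9y9/jsut-lab"


def fetch_quality(url):
    """Fetch and decode the quality-card JSON for a repository."""
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)
```

Without an API key this is limited to 100 requests/day, so cache the response rather than re-fetching.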
Higher-rated alternatives
ynop/audiomate — Python library for handling audio datasets.
reazon-research/ReazonSpeech — Massive open Japanese speech corpus.
common-voice/cv-dataset — Metadata and versioning details for the Common Voice dataset.
davidmartinrius/speech-dataset-generator — 🔊 Create labeled datasets, enhance audio quality, identify speakers, support diverse dataset...
EgorLakomkin/KTSpeechCrawler — Automatically constructs corpora for automatic speech recognition from YouTube videos.