zhenye234/LLaSA_training

LLaSA: Scaling Train-time and Inference-time Compute for LLaMA-based Speech Synthesis

Score: 52 / 100 · Established

This project helps developers train advanced text-to-speech (TTS) models, specifically LLaMA-based speech synthesizers, more efficiently. It takes large datasets of tokenized speech and text data as input, processing them to produce a trained model capable of generating high-quality, natural-sounding speech from text. This tool is designed for AI/ML engineers and researchers specializing in speech technology and natural language processing.

Use this if you are an AI/ML developer or researcher looking to fine-tune or train LLaMA-based speech synthesis models for robust, high-performance voice generation applications.

Not ideal if you are an end-user simply looking for a ready-to-use text-to-speech tool without deep technical involvement in model training.

speech-synthesis text-to-speech LLM-fine-tuning AI-model-training natural-language-processing
No package · No dependents
Maintenance 10 / 25
Adoption 10 / 25
Maturity 16 / 25
Community 16 / 25

Stars: 659
Forks: 52
Language: Python
License: —
Last pushed: Jan 21, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/zhenye234/LLaSA_training"

Open to everyone: 100 requests/day with no key needed; a free key raises the limit to 1,000/day.
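The same endpoint can also be called from Python instead of curl. A minimal sketch using only the standard library; the URL path segments come from the curl command above, but the shape of the JSON response (field names, nesting) is an assumption, not documented here:

```python
import json
import urllib.request

# Base URL taken from the curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(registry: str, owner: str, repo: str) -> str:
    """Build the quality-report endpoint URL for a repository."""
    return f"{BASE}/{registry}/{owner}/{repo}"

def fetch_quality(registry: str, owner: str, repo: str) -> dict:
    """Fetch the quality report; assumes the endpoint returns JSON."""
    with urllib.request.urlopen(quality_url(registry, owner, repo)) as resp:
        return json.load(resp)

# Same endpoint as the curl command above:
url = quality_url("transformers", "zhenye234", "LLaSA_training")
```

Without an API key this stays within the 100 requests/day anonymous limit; with a key you would typically add it as a header or query parameter, though the exact mechanism is not specified here.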