zai-org/GLM-TTS
GLM-TTS: Controllable & Emotion-Expressive Zero-shot TTS with Multi-Reward Reinforcement Learning
This project helps you convert written text into natural-sounding speech, even replicating specific voices and emotions. You provide text and, optionally, a short audio sample of a voice you want to clone, and it generates high-quality, expressive audio. This is useful for content creators, educators, audiobook producers, or anyone needing realistic spoken audio from text.
Use this if you need to generate human-like speech from text, clone specific voices from short audio clips, and control the emotional tone of the generated speech.
Not ideal if you need a simple, no-frills text-to-speech solution without advanced voice cloning or emotion control capabilities.
Stars: 949
Forks: 114
Language: Python
License: Apache-2.0
Category:
Last pushed: Dec 17, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/zai-org/GLM-TTS"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
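The curl example above can also be called from a script. A minimal sketch, assuming only the endpoint path shown in the curl command; the response schema is not documented here, so the code returns the raw decoded JSON rather than assuming field names:

```python
# Minimal sketch: build the quality-API URL for a repo and fetch it.
# Only the endpoint path comes from the curl example above; everything
# else (function names, return shape) is illustrative.
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality/llm-tools"

def quality_url(owner: str, repo: str) -> str:
    """Return the API URL for a given GitHub owner/repo slug."""
    return f"{API_BASE}/{owner}/{repo}"

def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch and decode the JSON payload (100 requests/day without a key)."""
    with urllib.request.urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)

print(quality_url("zai-org", "GLM-TTS"))
# → https://pt-edge.onrender.com/api/v1/quality/llm-tools/zai-org/GLM-TTS
```

Because the rate limit is per day, a script polling many repos should cache responses rather than re-fetching on every run.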
Higher-rated alternatives
jingyaogong/minimind-v
🚀 Train a 26M-parameter vision multimodal VLM from scratch in just 1 hour!
roboflow/vision-ai-checkup
Take your LLM to the optometrist.
SkyworkAI/Skywork-R1V
Skywork-R1V is an advanced multimodal AI model series developed by Skywork AI, specializing in...
NExT-GPT/NExT-GPT
Code and models for ICML 2024 paper, NExT-GPT: Any-to-Any Multimodal Large Language Model
EvolvingLMMs-Lab/NEO
NEO Series: Native Vision-Language Models from First Principles