deepanwadhwa/nanogpt-Audio

An experimental nanogpt fork that learns to speak Shakespeare by modeling EnCodec audio tokens.

Score: 24 / 100 (Experimental)

This project helps audio engineers and researchers experiment with training transformer models on raw audio. Text is converted to spoken audio, the audio is encoded into discrete EnCodec tokens, and the model then learns to generate new token sequences from that data, effectively creating speech in a learned style.
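To make the token step concrete, here is a minimal sketch of how multi-codebook audio tokens can be flattened into the single 1-D sequence a GPT-style model expects. EnCodec emits several parallel codebooks per time step; the interleaving scheme and vocabulary-offset trick below are illustrative assumptions, not necessarily the exact layout this fork uses.

```python
import numpy as np

def flatten_codes(codes: np.ndarray, codebook_size: int = 1024) -> np.ndarray:
    """Interleave (n_codebooks, T) codes into one 1-D token stream.

    Each codebook gets its own vocabulary slice so tokens stay unambiguous:
    token = code + codebook_index * codebook_size.
    """
    n_q, T = codes.shape
    offsets = np.arange(n_q)[:, None] * codebook_size  # (n_q, 1)
    return (codes + offsets).T.reshape(-1)  # time-major interleave

def unflatten_codes(tokens: np.ndarray, n_q: int, codebook_size: int = 1024) -> np.ndarray:
    """Invert flatten_codes back to (n_codebooks, T)."""
    codes = tokens.reshape(-1, n_q).T
    return codes - np.arange(n_q)[:, None] * codebook_size

codes = np.array([[1, 2, 3], [4, 5, 6]])   # 2 codebooks, 3 time steps
flat = flatten_codes(codes)                # [1, 1028, 2, 1029, 3, 1030]
assert (unflatten_codes(flat, n_q=2) == codes).all()
```

The round trip matters because generation runs in the flat token space and the output must be unflattened before EnCodec can decode it back to a waveform.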

Use this if you are an audio researcher or sound designer interested in exploring generative audio models specifically trained on spoken language.

Not ideal if you need a production-ready text-to-speech system or a tool to generate audio from diverse input styles beyond a single learned voice.

audio-synthesis speech-generation audio-modeling sound-research generative-audio
No package · No dependents

Maintenance: 6 / 25
Adoption: 5 / 25
Maturity: 13 / 25
Community: 0 / 25


Stars: 13
Forks:
Language: Python
License: MIT
Last pushed: Dec 31, 2025
Commits (30d): 0

Get this data via the API:

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/deepanwadhwa/nanogpt-Audio"
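The same request can be made from Python. This is a minimal sketch using only the standard library; the endpoint is taken from the curl line above, but the shape of the JSON response is an assumption and should be checked against a real call.

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality/llm-tools"

def quality_url(owner: str, repo: str) -> str:
    """Build the quality-API URL for a given GitHub owner/repo."""
    return f"{BASE}/{owner}/{repo}"

def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch and decode the JSON quality report (no key needed)."""
    with urllib.request.urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)

if __name__ == "__main__":
    # Prints the URL only; call fetch_quality(...) to hit the live endpoint.
    print(quality_url("deepanwadhwa", "nanogpt-Audio"))
```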

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.