parlance-zz/dualdiffusion
Dual Diffusion is a generative diffusion model for music trained on video game soundtracks.
This project helps music producers, sound designers, and content creators generate original music in the style of video game soundtracks. You provide descriptive text or audio examples as input, and it generates unique musical compositions, making it suited to creating bespoke background music, intros, or atmospheric tracks without extensive composition skills.
Use this if you need to quickly generate royalty-free, video game-style music for your projects and want creative control over the output through descriptive text.
Not ideal if you require precise control over every musical element for complex compositions or need music outside the video game soundtrack genre.
Stars: 90
Forks: 4
Language: Python
License: MIT
Category:
Last pushed: Mar 09, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/parlance-zz/dualdiffusion"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
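The curl command above can also be issued from Python. The sketch below is a minimal example, assuming only the endpoint URL shown in the source and that the API returns JSON; the response schema is not documented here, so the code just parses and returns whatever JSON comes back.

```python
# Minimal sketch of calling the quality API from Python.
# The endpoint URL comes from the listing above; the response
# schema is undocumented here, so we only parse generic JSON.
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"


def build_url(category: str, owner: str, repo: str) -> str:
    """Construct the endpoint URL for a given repository."""
    return f"{API_BASE}/{category}/{owner}/{repo}"


def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """Fetch and parse the quality data for one repository.

    Raises urllib.error.HTTPError on failures, e.g. after
    exceeding the 100 requests/day no-key rate limit.
    """
    url = build_url(category, owner, repo)
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.loads(resp.read().decode("utf-8"))


# Example (performs a network request):
#   data = fetch_quality("diffusion", "parlance-zz", "dualdiffusion")
#   print(json.dumps(data, indent=2))
```

Note that the no-key tier is rate-limited, so cache responses rather than re-fetching on every run.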
Higher-rated alternatives
- NVlabs/Sana: SANA: Efficient High-Resolution Image Synthesis with Linear Diffusion Transformer
- FoundationVision/VAR: [NeurIPS 2024 Best Paper Award][GPT beats diffusion🔥] [scaling laws in visual generation📈]...
- nerdyrodent/VQGAN-CLIP: Just playing with getting VQGAN+CLIP running locally, rather than having to use colab.
- huggingface/finetrainers: Scalable and memory-optimized training of diffusion models
- AssemblyAI-Community/MinImagen: MinImagen: A minimal implementation of the Imagen text-to-image model