NilsDem/control-transfer-diffusion

Repository for the paper "Combining audio control and style transfer using latent diffusion", accepted at ISMIR 2024

Quality score: 38 / 100 (Emerging)

This project lets musicians, producers, and sound designers generate new audio by combining the sound characteristics (timbre, texture) of one input with the melodic or rhythmic structure of another, which can be either an audio file or a MIDI sequence. Given two inputs, it produces a single output that blends elements of both, enabling creative audio transformations for sound design and music generation.

No commits in the last 6 months.

Use this if you want to generate novel audio by taking the "feel" or instrument sound of one audio track and applying it to the melody or rhythm of another audio track or a MIDI file.

Not ideal if you need a simple audio editor for cutting, splicing, or applying standard effects; this tool performs generative sound transformation.

music-production sound-design audio-synthesis generative-music audio-transformation
Status: Stale (6 months) · No Package · No Dependents

Maintenance: 0 / 25
Adoption: 8 / 25
Maturity: 16 / 25
Community: 14 / 25


Stars: 63
Forks: 9
Language: Jupyter Notebook
License:
Last pushed: Feb 19, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/NilsDem/control-transfer-diffusion"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000/day.
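The endpoint above can also be called programmatically. A minimal Python sketch, assuming the URL pattern follows the curl example (category `diffusion`, then owner, then repo); the response schema is not documented here, so this only builds and fetches the URL:

```python
from urllib.parse import quote
from urllib.request import urlopen  # used only when you actually fetch

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    # Build the per-repository quality endpoint URL, escaping each path segment.
    return f"{API_BASE}/{quote(category)}/{quote(owner)}/{quote(repo)}"

url = quality_url("diffusion", "NilsDem", "control-transfer-diffusion")
# To fetch (counts against the 100 requests/day limit):
#     raw = urlopen(url).read()
```

Keeping URL construction separate from the network call makes the path logic easy to test offline and to reuse for other repositories.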