acids-ircam/RAVE
Official implementation of the RAVE model: a Realtime Audio Variational autoEncoder
RAVE is a tool for musicians, sound designers, and artists that enables real-time audio manipulation and synthesis using neural networks. Train it on audio recordings, then generate new sounds, transform existing ones, or apply stylistic changes in real time, much like a high-quality voice changer or an advanced synthesizer. It is well suited to digital audio workstation (DAW) and live-performance setups that aim to integrate AI-powered sound generation.
Use this if you want to create unique neural audio synthesis, perform real-time sound transformations, or experiment with AI-driven sound design within a music production or live performance environment.
Not ideal if you are looking for a simple audio editor or traditional sound effects processor without an interest in neural network-based sound manipulation.
Stars: 1,698
Forks: 218
Language: Python
License: —
Last pushed: Mar 07, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/acids-ircam/RAVE"
Open to everyone: 100 requests/day with no key needed; a free key raises the limit to 1,000/day.
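The curl command above can also be issued from Python. A minimal sketch follows: the base URL is taken from the listing, but the `quality_url` and `fetch_quality` helper names are illustrative, and the response schema is not documented here, so the JSON is returned as-is rather than parsed into assumed fields.

```python
# Sketch of a client for the repo-quality API shown above.
# Assumption: the endpoint follows the pattern <base>/<owner>/<repo>
# seen in the curl example; field names in the response are unknown.
import json
import urllib.request

BASE_URL = "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks"


def quality_url(owner: str, repo: str) -> str:
    """Build the API endpoint for a given GitHub owner/repo pair."""
    return f"{BASE_URL}/{owner}/{repo}"


def fetch_quality(owner: str, repo: str) -> dict:
    """GET the endpoint and decode the JSON body (requires network access)."""
    with urllib.request.urlopen(quality_url(owner, repo)) as resp:
        return json.loads(resp.read().decode("utf-8"))


if __name__ == "__main__":
    print(quality_url("acids-ircam", "RAVE"))
    # → https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/acids-ircam/RAVE
```

With a key, the same request would presumably add an authentication header, but the header name is not given in the listing, so it is left out here.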
Related frameworks
Naresh1318/Adversarial_Autoencoder
A wizard's guide to Adversarial Autoencoders
mseitzer/pytorch-fid
Compute FID scores with PyTorch.
ratschlab/aestetik
AESTETIK: Convolutional autoencoder for learning spot representations from spatial...
jaanli/variational-autoencoder
Variational autoencoder implemented in tensorflow and pytorch (including inverse autoregressive flow)
nathanhubens/Autoencoders
Implementation of simple autoencoders networks with Keras