diffusers and audio-diffusion-pytorch

Diffusers is a general-purpose diffusion framework, while audio-diffusion-pytorch provides specialized audio-generation implementations built on the same PyTorch ecosystem. The two are complements rather than competitors: the former supplies broad infrastructure, the latter a focused audio toolkit compatible with that architecture.

| Metric | diffusers | audio-diffusion-pytorch |
| --- | --- | --- |
| Score | 87 (Verified) | 55 (Established) |
| Maintenance | 22/25 | 0/25 |
| Adoption | 15/25 | 11/25 |
| Maturity | 25/25 | 25/25 |
| Community | 25/25 | 19/25 |
| Stars | 33,029 | 2,094 |
| Forks | 6,832 | 178 |
| Downloads | | |
| Commits (30d) | 85 | 0 |
| Language | Python | Python |
| License | Apache-2.0 | MIT |
| Risk flags | No risk flags | Stale 6m |

About diffusers

huggingface/diffusers

🤗 Diffusers: State-of-the-art diffusion models for image, video, and audio generation in PyTorch.

This library helps developers and researchers create or use AI models that generate new images, audio, or even molecular structures. You provide text descriptions or existing data, and it outputs novel visual, auditory, or structural content. It's designed for machine learning practitioners and AI artists.

AI-art-generation synthetic-media AI-research computational-chemistry

About audio-diffusion-pytorch

archinetai/audio-diffusion-pytorch

Audio generation using diffusion models, in PyTorch.

This is a comprehensive toolkit for anyone working with audio generation using advanced AI models. You can create new audio from scratch, generate audio based on text descriptions, or enhance existing low-quality audio. It takes in audio waveforms or text prompts and outputs high-quality, synthesized audio, making it useful for sound designers, music producers, or researchers in audio AI.

audio-synthesis sound-design music-generation audio-upsampling text-to-audio

Scores updated daily from GitHub, PyPI, and npm data.