diffusers and stable-diffusion-videos

diffusers is a foundational library that provides the core diffusion-model implementations and pipelines, while stable-diffusion-videos builds on top of it to add specialized video generation: it interpolates through latent space between text prompts.

                 diffusers        stable-diffusion-videos
Overall score    87 (Verified)    61 (Established)
Maintenance      22/25            6/25
Adoption         15/25            10/25
Maturity         25/25            25/25
Community        25/25            20/25
Stars            33,029           4,671
Forks            6,832            449
Downloads
Commits (30d)    85               0
Language         Python           Python
License          Apache-2.0       Apache-2.0
Risk flags       none             none

About diffusers

huggingface/diffusers

🤗 Diffusers: State-of-the-art diffusion models for image, video, and audio generation in PyTorch.

This library helps developers and researchers create or use AI models that generate new images, audio, or even molecular structures. You provide text descriptions or existing data, and it outputs novel visual, auditory, or structural content. It's designed for machine learning practitioners and AI artists.

AI-art-generation synthetic-media AI-research computational-chemistry
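The paragraph above describes the text-in, image-out flow. A minimal sketch of that flow using diffusers' documented `StableDiffusionPipeline` API follows; the model ID and the `"cuda"` device are illustrative choices, and the code assumes diffusers and torch are installed (model weights are downloaded on first use).

```python
def generate_image(prompt, model_id="runwayml/stable-diffusion-v1-5"):
    # Imports are deferred so diffusers/torch are only required when
    # the function is actually called.
    import torch
    from diffusers import StableDiffusionPipeline

    # Load a pretrained text-to-image pipeline and move it to the GPU.
    pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
    pipe = pipe.to("cuda")

    # Calling the pipeline runs the full denoising loop; .images holds
    # the generated PIL images.
    return pipe(prompt).images[0]
```

Usage would look like `generate_image("an astronaut riding a horse").save("out.png")`.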

About stable-diffusion-videos

nateraw/stable-diffusion-videos

Create 🔥 videos with Stable Diffusion by exploring the latent space and morphing between text prompts

This tool helps content creators and digital artists generate dynamic, AI-powered videos by smoothly transitioning between different text descriptions. You input a series of creative text prompts and optional audio, and it outputs a unique video where the visuals morph from one concept to the next, sometimes even synchronized to music. It's ideal for anyone looking to quickly produce imaginative visual content without traditional animation skills.

digital-art content-creation music-video-production generative-media
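The morphing described above works by interpolating between the latent noise vectors of two prompts and decoding each intermediate latent into a frame; diffusion latents are conventionally blended with spherical linear interpolation (slerp) rather than a straight line. A simplified sketch on plain Python lists (real latents are torch tensors):

```python
import math

def slerp(t, v0, v1):
    # Spherical linear interpolation between two vectors: walk along
    # the arc between them instead of the straight chord, which keeps
    # intermediate latents at a plausible magnitude.
    dot = sum(a * b for a, b in zip(v0, v1))
    norm0 = math.sqrt(sum(a * a for a in v0))
    norm1 = math.sqrt(sum(b * b for b in v1))
    cos_theta = max(-1.0, min(1.0, dot / (norm0 * norm1)))
    theta = math.acos(cos_theta)
    if theta < 1e-6:  # nearly parallel: fall back to linear interpolation
        return [(1 - t) * a + t * b for a, b in zip(v0, v1)]
    s0 = math.sin((1 - t) * theta) / math.sin(theta)
    s1 = math.sin(t * theta) / math.sin(theta)
    return [s0 * a + s1 * b for a, b in zip(v0, v1)]

# Ten interpolation steps between two latent endpoints; each step would
# be decoded by the diffusion model into one video frame.
frames = [slerp(i / 9, [1.0, 0.0], [0.0, 1.0]) for i in range(10)]
```

For orthogonal unit vectors the halfway point is `[√2/2, √2/2]`, still on the unit circle, which is exactly the property linear interpolation would lose.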

Scores updated daily from GitHub, PyPI, and npm data.