masaishi/VidDiffusion
A Python OSS library that provides a vid2vid (video-to-video) pipeline built on Hugging Face's diffusers.
VidDiffusion helps you transform ordinary video footage into stylized artistic creations. You provide an existing video and describe the desired style with text prompts, and the tool generates a new video with that aesthetic applied. This is ideal for content creators, artists, and marketers who want to add unique visual flair to video projects without complex editing software.
No commits in the last 6 months. Available on PyPI.
Use this if you want to quickly re-imagine your videos in different artistic styles, like cyberpunk or anime, using AI.
Not ideal if you need precise frame-by-frame control over video elements or are looking for traditional video editing features.
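Conceptually, tools like this apply an img2img-style diffusion pass to each frame and then reassemble the result. A minimal sketch of that loop is below; the `stylize_frame` helper is hypothetical and stands in for a real diffusers call (e.g. a `StableDiffusionImg2ImgPipeline` invocation), not VidDiffusion's actual API:

```python
def stylize_frame(frame, prompt, strength=0.6):
    """Hypothetical stand-in for an img2img diffusion call.

    In a real pipeline this would be something like:
        pipe(prompt=prompt, image=frame, strength=strength).images[0]
    Here we just record what would be passed to the model.
    """
    return {"frame": frame, "prompt": prompt, "strength": strength}

def vid2vid(frames, prompt, strength=0.6):
    """Apply the style prompt to every frame independently."""
    return [stylize_frame(f, prompt, strength) for f in frames]

# Three placeholder frames standing in for decoded video frames.
styled = vid2vid(["f0", "f1", "f2"], "cyberpunk city at night")
print(len(styled))  # 3
```

Note that naive per-frame stylization tends to flicker between frames; production vid2vid pipelines add temporal consistency tricks on top of this basic loop.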
Stars: 7
Forks: —
Language: Python
License: Apache-2.0
Category: —
Last pushed: Apr 24, 2023
Commits (30d): 0
Dependencies: 7
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/masaishi/VidDiffusion"
Open to everyone: 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
siliconflow/onediff
OneDiff: An out-of-the-box acceleration library for diffusion models.
wooyeolbaek/attention-map-diffusers
🚀 Cross attention map tools for huggingface/diffusers
jina-ai/discoart
🪩 Create Disco Diffusion artworks in one line
chengzeyi/stable-fast
https://wavespeed.ai/ Best inference performance optimization framework for HuggingFace...
hkproj/pytorch-stable-diffusion
Stable Diffusion implemented from scratch in PyTorch