open-mmlab/Live2Diff
Live2Diff: a pipeline that processes live video streams with a uni-directional video diffusion model.
This tool transforms live video streams or existing video files into stylized versions in real time. You feed in a standard video, and it outputs a new video re-rendered in a different artistic style, such as anime or a Disney Pixar aesthetic. This makes it useful for artists, content creators, and marketers who want to generate distinctive visual content quickly.
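The "uni-directional" part of the design refers to temporal attention in which each frame attends only to itself and earlier frames, which is what makes streaming inference possible: future frames are never needed. As a rough illustration only, here is a minimal PyTorch sketch of such a causal temporal mask; the function name and shapes are hypothetical and this is not Live2Diff's actual code.

# Minimal sketch of uni-directional (causal) temporal attention over
# video frames. Illustrates the masking idea only; not Live2Diff's
# actual implementation.
import torch
import torch.nn.functional as F

def causal_temporal_attention(q, k, v):
    # q, k, v: (batch, frames, dim); each frame may attend only to
    # itself and earlier frames, so future frames never leak in.
    frames = q.shape[1]
    mask = torch.triu(torch.ones(frames, frames, dtype=torch.bool), diagonal=1)
    scores = q @ k.transpose(-2, -1) / q.shape[-1] ** 0.5
    scores = scores.masked_fill(mask, float("-inf"))
    return F.softmax(scores, dim=-1) @ v

q = k = v = torch.randn(1, 8, 64)  # 8 frames, 64-dim features
out = causal_temporal_attention(q, k, v)
print(out.shape)  # torch.Size([1, 8, 64])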
200 stars. No commits in the last 6 months.
Use this if you need to quickly stylize live video feeds or existing footage in a different artistic style.
Not ideal if you require frame-perfect, high-fidelity video editing or object-specific transformations beyond overall stylistic changes.
Stars: 200
Forks: 19
Language: Python
License: Apache-2.0
Category: diffusion
Last pushed: Jul 22, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/open-mmlab/Live2Diff"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
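The same record can be fetched programmatically with any HTTP client. Below is a minimal Python sketch using requests; the endpoint URL comes from the curl command above, but the response schema is not documented here, so the full payload is printed rather than guessing field names.

# Fetch the quality record for open-mmlab/Live2Diff.
# Assumes the endpoint returns JSON; the schema is not documented on
# this page, so the whole payload is dumped instead of named fields.
import requests

URL = "https://pt-edge.onrender.com/api/v1/quality/diffusion/open-mmlab/Live2Diff"

resp = requests.get(URL, timeout=10)
resp.raise_for_status()  # anonymous access is capped at 100 requests/day
print(resp.json())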
Higher-rated alternatives
neggles/animatediff-cli
a CLI utility/library for AnimateDiff stable diffusion generation
sakalond/StableGen
Transform your 3D texturing workflow with the power of generative AI, directly within Blender!
victordibia/peacasso
UI interface for experimenting with multimodal (text, image) models (stable diffusion).
ai-forever/Kandinsky-2
Kandinsky 2 — multilingual text2image latent diffusion model
carefree0910/carefree-drawboard
🎨 Infinite Drawboard in Python