albertotrunk/depth2video
stable diffusion V2 depth2video - animation - coherence
This project helps animators, digital artists, and video content creators transform existing video footage into new, stylized animations. You provide an original video and its corresponding depth mask (showing what's close and far in the scene). The tool then generates a new video where the content is re-imagined with a consistent visual style, maintaining the original motion and depth.
No commits in the last 6 months.
Use this if you want to creatively re-style a video while preserving its scene depth and motion, generating unique animated content from existing footage.
Not ideal if you need to create entirely new video content from scratch or perform detailed frame-by-frame editing.
Stars: 47
Forks: 5
Language: Jupyter Notebook
License: Apache-2.0
Category:
Last pushed: Apr 30, 2023
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/albertotrunk/depth2video"
Open to everyone: 100 requests/day with no key required. Get a free key for 1,000 requests/day.
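The curl call above can also be made from a script. The sketch below is a minimal Python version: only the URL pattern comes from this page, and the response is assumed to be a JSON object whose exact fields are not documented here.

```python
# Sketch: query the pt-edge quality endpoint shown above.
# The URL pattern is taken from the page; the JSON response schema is an
# assumption and should be checked against a real response.
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality/diffusion"


def quality_url(owner: str, repo: str) -> str:
    """Build the endpoint URL for a given GitHub owner/repo pair."""
    return f"{API_BASE}/{owner}/{repo}"


def fetch_quality(owner: str, repo: str) -> dict:
    """GET the endpoint and decode its JSON body (performs a network call)."""
    with urllib.request.urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)


# Prints the same URL as the curl example above.
print(quality_url("albertotrunk", "depth2video"))
```

Unauthenticated calls are limited to 100/day; with an API key you would attach it per the service's instructions (the key mechanism is not shown on this page).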
Higher-rated alternatives
neggles/animatediff-cli: a CLI utility/library for AnimateDiff stable diffusion generation
sakalond/StableGen: Transform your 3D texturing workflow with the power of generative AI, directly within Blender!
victordibia/peacasso: UI interface for experimenting with multimodal (text, image) models (stable diffusion).
ai-forever/Kandinsky-2: Kandinsky 2 — multilingual text2image latent diffusion model
carefree0910/carefree-drawboard: 🎨 Infinite Drawboard in Python