lunarring/latentblending
Create butter-smooth transitions between prompts, powered by Stable Diffusion
This project helps video creators and digital artists generate smooth, visually engaging transitions between different conceptual scenes in videos. You provide text descriptions of your starting and ending scenes, and it produces a video clip that seamlessly morphs from one to the other. It's ideal for visual storytellers, motion graphics designers, and content creators looking to add dynamic, AI-generated visual effects to their work.
367 stars. No commits in the last 6 months.
Use this if you want to create highly customized, fluid video transitions between distinct imagery described by text prompts, powered by Stable Diffusion.
Not ideal if you need to perform traditional video editing tasks like cutting, splicing, or adding standard dissolves between existing video footage.
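Conceptually, tools like this blend the two scenes in the model's latent space rather than cross-fading pixels, and spherical linear interpolation (slerp) is the standard way to interpolate between two such high-dimensional vectors. A minimal, illustrative sketch of slerp in plain Python (this is the general technique, not the repository's actual API):

```python
import math

def slerp(v0, v1, t):
    """Spherically interpolate between two unit-norm vectors at fraction t in [0, 1]."""
    dot = sum(a * b for a, b in zip(v0, v1))
    dot = max(-1.0, min(1.0, dot))  # clamp against floating-point drift
    theta = math.acos(dot)
    if theta < 1e-6:
        # Nearly parallel vectors: fall back to linear interpolation.
        return [(1 - t) * a + t * b for a, b in zip(v0, v1)]
    s0 = math.sin((1 - t) * theta) / math.sin(theta)
    s1 = math.sin(t * theta) / math.sin(theta)
    return [s0 * a + s1 * b for a, b in zip(v0, v1)]
```

Sampling `t` from 0 to 1 over many frames yields a smooth path between the two endpoints, which is why the resulting video morphs continuously instead of dissolving.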
Stars: 367
Forks: 29
Language: Python
License: BSD-3-Clause
Category:
Last pushed: Mar 29, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/lunarring/latentblending"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
neggles/animatediff-cli: a CLI utility/library for AnimateDiff stable diffusion generation
sakalond/StableGen: Transform your 3D texturing workflow with the power of generative AI, directly within Blender!
victordibia/peacasso: UI interface for experimenting with multimodal (text, image) models (stable diffusion).
ai-forever/Kandinsky-2: Kandinsky 2 — multilingual text2image latent diffusion model
carefree0910/carefree-drawboard: 🎨 Infinite Drawboard in Python