Haoke98/FrameDiffusion
A frame-to-frame, video-to-video video editor based on Stable Diffusion
This tool helps video editors and content creators transform existing video footage into new, stylized visual content. You input a source video, and the system processes it frame-by-frame using AI to output a modified video, allowing for creative effects and visual changes. This is ideal for those looking to refresh content or apply unique artistic styles to their video projects.
No commits in the last 6 months.
Use this if you want to creatively re-imagine or stylize an existing video by applying AI-powered transformations to its visual content.
Not ideal if you need to create a video from scratch using only text prompts or if you require precise, traditional video editing functionalities like cutting, splicing, or adding sound.
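The frame-by-frame workflow described above can be sketched as a loop that applies an image-to-image transform to each frame independently. A minimal sketch in plain Python, with a placeholder `stylize` function standing in for the Stable Diffusion img2img step; the function name, the frame representation, and `edit_video` are illustrative assumptions, not the repo's actual API:

```python
from typing import Callable, List

# Placeholder frame type: a 2D grid of 8-bit grayscale pixel values.
# The real tool would work on decoded video frames (e.g. RGB arrays).
Frame = List[List[int]]

def stylize(frame: Frame) -> Frame:
    """Placeholder for the Stable Diffusion img2img step.
    Here it simply inverts pixel values so the sketch is runnable."""
    return [[255 - px for px in row] for row in frame]

def edit_video(frames: List[Frame], transform: Callable[[Frame], Frame]) -> List[Frame]:
    """Apply an image-to-image transform to every frame independently,
    mirroring the frame2frame processing model described above."""
    return [transform(f) for f in frames]

# Tiny demo: two 2x2 "frames".
src = [[[0, 255], [128, 64]], [[10, 20], [30, 40]]]
out = edit_video(src, stylize)
```

In the real pipeline the decode/re-encode steps (e.g. via OpenCV or ffmpeg) would wrap this loop, and `transform` would call an img2img diffusion pipeline instead of a pixel inversion.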
Stars: 10
Forks: 1
Language: Python
License: Apache-2.0
Category:
Last pushed: Sep 13, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/Haoke98/FrameDiffusion"
Open to everyone: 100 requests/day with no key required, or 1,000/day with a free key.
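A small standard-library Python client for the endpoint above. The URL pattern follows the curl example, but the JSON response schema is not documented on this page, so this is a sketch that only builds the request URL and decodes whatever JSON comes back:

```python
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the API URL following the pattern shown in the curl example."""
    return f"{API_BASE}/{category}/{owner}/{repo}"

def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """Fetch and decode the JSON payload for a repository.
    The response keys are not documented here, so callers should
    inspect what they actually receive."""
    with urllib.request.urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)

url = quality_url("diffusion", "Haoke98", "FrameDiffusion")
```

For example, `fetch_quality("diffusion", "Haoke98", "FrameDiffusion")` would request the same resource as the curl command above.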
Higher-rated alternatives
neggles/animatediff-cli
a CLI utility/library for AnimateDiff stable diffusion generation
sakalond/StableGen
Transform your 3D texturing workflow with the power of generative AI, directly within Blender!
victordibia/peacasso
UI interface for experimenting with multimodal (text, image) models (stable diffusion).
ai-forever/Kandinsky-2
Kandinsky 2 — multilingual text2image latent diffusion model
carefree0910/carefree-drawboard
🎨 Infinite Drawboard in Python