KyujinHan/Tune-A-VideKO
Korean-language one-shot video tuning with Stable Diffusion
This tool helps content creators and marketers quickly transform existing video footage by changing elements within the scene. You input a short video and a Korean text description of what you want to see, and it generates a new video that follows your instructions while preserving the original motion. It suits anyone producing video content for Korean-speaking audiences who needs to rapidly prototype different visual styles or subjects.
No commits in the last 6 months.
Use this if you need to quickly re-imagine or adapt existing video clips with new subjects, styles, or environments based on Korean text prompts.
Not ideal if you need precise control over every pixel, are generating entirely new video sequences from scratch, or require non-Korean text input.
Stars: 11
Forks: 3
Language: Jupyter Notebook
License: MIT
Category:
Last pushed: Aug 18, 2023
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/KyujinHan/Tune-A-VideKO"
Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000/day.
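For scripted use, the same endpoint can be called from Python with the standard library alone. A minimal sketch, assuming the response body is JSON (the response schema is not documented here, so it is parsed generically):

```python
import json
import urllib.request

# Base path taken from the curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the per-repository quality endpoint URL."""
    return f"{BASE}/{category}/{owner}/{repo}"

def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """Fetch the quality record (no key needed for up to 100 requests/day).

    Assumes the endpoint returns a JSON object; the exact fields are
    not documented here.
    """
    with urllib.request.urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)

if __name__ == "__main__":
    # Same request as the curl command, built programmatically.
    print(quality_url("diffusion", "KyujinHan", "Tune-A-VideKO"))
```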
Higher-rated alternatives
jolibrain/joliGEN
Generative AI Image and Video Toolset with GANs and Diffusion for Real-World Applications
zhangmozhe/Deep-Exemplar-based-Video-Colorization
The source code of CVPR 2019 paper "Deep Exemplar-based Video Colorization".
naver-ai/StyleKeeper
Official Pytorch implementation of "StyleKeeper: Prevent Content Leakage using Negative Visual...
un1tz3r0/finetunepixelartdiffusion
Fine tune a pixelart diffusion model with isometric dataset.
lixiaowen-xw/DiffuEraser
DiffuEraser is a diffusion model for video inpainting, which performs great content completeness...