Tele-AI/OmniVDiff
Omni Controllable Video Diffusion
OmniVDiff helps video creators, animators, and researchers generate and understand video content with precise control. You supply conditioning inputs such as existing video frames, depth maps, or segmentation masks, and the model produces new video sequences that adhere to those controls.
Use this if you need to generate high-quality videos and precisely control elements like motion, depth, or specific object placement within the scene.
Not ideal if you want a simple, one-click video generator and don't need fine-grained control or the ability to provide detailed conditioning inputs.
Stars
42
Forks
2
Language
Python
License
MIT
Category
Last pushed
Dec 22, 2025
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/Tele-AI/OmniVDiff"
Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000 requests/day.
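If you prefer to call the endpoint from code, a small helper can build the per-repository URL shown in the curl example above. This is a minimal sketch: the `quality_url` helper and the `category` parameter are assumptions based on the URL structure, and the JSON response schema is not documented here, so the sketch only constructs and fetches the URL.

```python
from urllib.parse import quote
from urllib.request import urlopen  # used only if you actually fetch

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    # Build the per-repo quality endpoint URL; path segments are
    # percent-encoded in case a name contains unusual characters.
    # NOTE: the category segment ("diffusion" below) is inferred from
    # the example URL, not from documented API parameters.
    return f"{BASE}/{quote(category)}/{quote(owner)}/{quote(repo)}"

url = quality_url("diffusion", "Tele-AI", "OmniVDiff")
print(url)
# To fetch (no key needed, 100 requests/day):
#   with urlopen(url) as resp:
#       data = resp.read()
```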
Higher-rated alternatives
lixinustc/Awesome-diffusion-model-for-image-processing
one summary of diffusion-based image processing, including restoration, enhancement, coding,...
showlab/Awesome-Video-Diffusion
A curated list of recent diffusion models for video generation, editing, and various other applications.
xlite-dev/Awesome-DiT-Inference
📚A curated list of Awesome Diffusion Inference Papers with Codes: Sampling, Cache, Quantization,...
wangkai930418/awesome-diffusion-categorized
collection of diffusion model papers categorized by their subareas
ChenHsing/Awesome-Video-Diffusion-Models
[CSUR] A Survey on Video Diffusion Models