xie-lab-ml/IV-mixed-Sampler
[ICLR2025] IV-Mixed Sampler: Leveraging Image Diffusion Models for Enhanced Video Synthesis
This tool helps video creators and AI artists generate higher-quality videos from text prompts. It is a training-free sampler that plugs into an existing text-to-video diffusion model (such as AnimateDiff, ModelScope, or VideoCrafter): given a text description, it uses an image diffusion model to sharpen individual frames while the video model keeps motion coherent, producing clearly better visual quality without retraining. It's designed for creative professionals who want state-of-the-art video generation.
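One way to picture the idea is a reverse-diffusion loop in which each step blends a denoising update from an image model (per-frame fidelity) with one from a video model (temporal coherence). The sketch below is a toy illustration of that mixing pattern only; the function names, weights, and schedule are assumptions and do not reflect the repository's actual API.

import numpy as np

def image_denoise_step(frames: np.ndarray, t: int) -> np.ndarray:
    # Stand-in for an image diffusion model applied frame by frame.
    return frames * 0.98  # placeholder update

def video_denoise_step(frames: np.ndarray, t: int) -> np.ndarray:
    # Stand-in for a video diffusion model applied to the whole clip.
    return frames * 0.97  # placeholder update

def iv_mixed_sampling(num_frames=16, height=64, width=64, steps=50, image_weight=0.5):
    # Start from pure noise, as in standard diffusion sampling.
    frames = np.random.randn(num_frames, height, width, 3)
    for t in reversed(range(steps)):
        per_frame = image_denoise_step(frames, t)   # sharpen individual frames
        per_clip = video_denoise_step(frames, t)    # keep motion consistent
        # Mix the two updates at every step instead of using either model alone.
        frames = image_weight * per_frame + (1 - image_weight) * per_clip
    return frames

if __name__ == "__main__":
    video = iv_mixed_sampling()
    print(video.shape)  # (16, 64, 64, 3)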
No commits in the last 6 months.
Use this if you are a video producer or content creator using AI diffusion models and want to achieve superior visual fidelity in your generated videos, moving closer to the quality of proprietary tools like Pika-2.0.
Not ideal if you are looking for a standalone video generation model built from scratch (this enhances existing ones rather than replacing them), or if you do not have access to a powerful GPU.
Stars: 39
Forks: 1
Language: Python
License: —
Category: Diffusion
Last pushed: Feb 17, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/xie-lab-ml/IV-mixed-Sampler"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
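The same endpoint can be queried from Python. This is a minimal sketch: the URL is taken from the curl command above, but the API's JSON schema is not documented in this listing, so the response is printed as-is rather than assuming particular field names.

import requests

URL = "https://pt-edge.onrender.com/api/v1/quality/diffusion/xie-lab-ml/IV-mixed-Sampler"

resp = requests.get(URL, timeout=30)
resp.raise_for_status()
data = resp.json()
print(data)  # inspect the returned fields before relying on any specific key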
Higher-rated alternatives
hao-ai-lab/FastVideo
A unified inference and post-training framework for accelerated video generation.
ModelTC/LightX2V
Light Image Video Generation Inference Framework
thu-ml/TurboDiffusion
TurboDiffusion: 100–200× Acceleration for Video Diffusion Models
PKU-YuanGroup/Helios
Helios: Real Real-Time Long Video Generation Model
PKU-YuanGroup/MagicTime
[TPAMI 2025🔥] MagicTime: Time-lapse Video Generation Models as Metamorphic Simulators