menyifang/MIMO
Official implementation of "MIMO: Controllable Character Video Synthesis with Spatial Decomposed Modeling"
This tool helps animators and content creators generate realistic character videos from simple inputs. You provide an image of a character, along with motion data (like a 3D pose or another video), and it outputs a new video of your character performing that motion within a scene. This is ideal for professionals in animation, game development, or digital content creation who need to quickly prototype character movements.
1,575 stars. No commits in the last 6 months.
Use this if you need to animate a static character image with custom movements or integrate a character into a dynamic scene, without extensive manual rigging or frame-by-frame animation.
Not ideal if you need to generate highly precise, physics-based simulations or require pixel-level control over every aspect of a character's interaction with complex environments.
Stars: 1,575
Forks: 70
Language: Python
License: Apache-2.0
Category:
Last pushed: Jun 19, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/menyifang/MIMO"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
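The curl command above can be wrapped in a few lines of Python. A minimal sketch: the URL pattern (`/api/v1/quality/<category>/<owner>/<repo>`) is taken directly from the example, but the JSON response schema is an assumption and should be checked against an actual response.

```python
# Minimal sketch of calling the quality API shown above.
# The endpoint path pattern is taken from the curl example;
# assuming the endpoint returns JSON (unverified).
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the endpoint URL for a given repository."""
    return f"{BASE}/{category}/{owner}/{repo}"

def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """Fetch the quality record, assuming a JSON response body."""
    with urllib.request.urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)

if __name__ == "__main__":
    # Reproduces the URL from the curl example.
    print(quality_url("diffusion", "menyifang", "MIMO"))
```

Within the 100 requests/day keyless tier, `fetch_quality("diffusion", "menyifang", "MIMO")` should return the same data as the curl call.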
Higher-rated alternatives
hao-ai-lab/FastVideo
A unified inference and post-training framework for accelerated video generation.
ModelTC/LightX2V
Light Image Video Generation Inference Framework
thu-ml/TurboDiffusion
TurboDiffusion: 100–200× Acceleration for Video Diffusion Models
PKU-YuanGroup/Helios
Helios: Real-Time Long Video Generation Model
PKU-YuanGroup/MagicTime
[TPAMI 2025🔥] MagicTime: Time-lapse Video Generation Models as Metamorphic Simulators