RQ-Wu/LAMP
[CVPR 2024] | LAMP: Learn a Motion Pattern for Few-Shot Based Video Generation
This project helps video creators, marketers, or artists generate new videos with specific motions using minimal examples. You provide a few short videos demonstrating a desired motion (like a horse running or a person smiling) and a text description, and it produces new videos applying that motion to different subjects or scenes. The ideal user is someone who needs to create custom video content but lacks extensive motion capture data or wants to iterate on motion styles quickly.
283 stars. No commits in the last 6 months.
Use this if you need to generate short video clips with a particular learned motion, using only a handful of example videos and a text prompt to guide the output.
Not ideal if you need very long videos, precise frame-by-frame control, or complex, multi-stage actions beyond a single learned motion pattern.
Stars: 283
Forks: 13
Language: Python
License: —
Category:
Last pushed: Apr 22, 2024
Commits (last 30 days): 0
Get this data via the API:
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/RQ-Wu/LAMP"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
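To consume the endpoint programmatically, a minimal sketch in Python of parsing the response into a one-line summary. The JSON field names (`repo`, `stars`, `forks`, `language`) are assumptions for illustration, not the API's documented schema — inspect an actual response before relying on them.

```python
import json

# Hypothetical response body from the endpoint above; the field names
# are assumed from the stats shown on this page, not a documented schema.
sample = (
    '{"repo": "RQ-Wu/LAMP", "stars": 283, "forks": 13, '
    '"language": "Python", "last_pushed": "2024-04-22"}'
)

data = json.loads(sample)

def summarize(d):
    # Build a one-line summary similar to the header of this page.
    return f"{d['repo']}: {d['stars']} stars, {d['forks']} forks ({d['language']})"

print(summarize(data))
```

In a real script you would replace `sample` with the body returned by the curl command above (e.g. fetched with `urllib.request`).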
Higher-rated alternatives
hao-ai-lab/FastVideo
A unified inference and post-training framework for accelerated video generation.
ModelTC/LightX2V
A lightweight image/video generation inference framework.
thu-ml/TurboDiffusion
TurboDiffusion: 100–200× Acceleration for Video Diffusion Models
PKU-YuanGroup/Helios
Helios: Real-Time Long Video Generation Model
PKU-YuanGroup/MagicTime
[TPAMI 2025🔥] MagicTime: Time-lapse Video Generation Models as Metamorphic Simulators