GAP-LAB-CUHK-SZ/REC-MV
[CVPR2023] REC-MV: REconstructing 3D Dynamic Cloth from Monocular Videos
REC-MV helps fashion designers, game developers, and filmmakers digitize dynamic clothing from standard video footage. It takes a monocular video of a person wearing a garment as input and produces a detailed 3D mesh reconstruction of the moving cloth, enabling realistic simulations and animations. This tool suits professionals who need 3D clothing assets without expensive specialized capture equipment.
282 stars. No commits in the last 6 months.
Use this if you need to create high-quality 3D models of clothing as it moves, using only standard video recordings.
Not ideal if you only need static 3D models of clothing or already have access to specialized 3D scanning equipment.
Stars: 282
Forks: 11
Language: Python
License: MIT
Category: Computer Vision
Last pushed: Aug 13, 2023
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/computer-vision/GAP-LAB-CUHK-SZ/REC-MV"
Open to everyone: 100 requests/day with no key required. Get a free key for 1,000 requests/day.
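The same endpoint can also be called from Python. A minimal sketch is below; note that the response schema is not documented on this page, so the helper simply parses and returns whatever JSON the API sends back, and the `fetch_quality` / `build_url` names are illustrative, not part of the API.

```python
import json
import urllib.request

# Base path taken from the curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/quality"


def build_url(category: str, owner: str, repo: str) -> str:
    """Build the quality-endpoint URL for one repository."""
    return f"{BASE}/{category}/{owner}/{repo}"


def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """Fetch a repository's quality record.

    The JSON shape is undocumented here, so the parsed payload is
    returned as-is for the caller to inspect.
    """
    url = build_url(category, owner, repo)
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)


if __name__ == "__main__":
    # Prints the endpoint for this repository; call fetch_quality()
    # to retrieve the live data (counts against the daily rate limit).
    print(build_url("computer-vision", "GAP-LAB-CUHK-SZ", "REC-MV"))
```

Only `build_url` is exercised offline; `fetch_quality` performs a real network request and is subject to the rate limits described above.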
Higher-rated alternatives
vita-epfl/monoloco
A 3D vision library from 2D keypoints: monocular and stereo 3D detection for humans, social...
fangchangma/self-supervised-depth-completion
ICRA 2019 "Self-supervised Sparse-to-Dense: Self-supervised Depth Completion from LiDAR and...
nburrus/stereodemo
Small Python utility to compare and visualize the output of various stereo depth estimation algorithms
JiawangBian/sc_depth_pl
SC-Depth (V1, V2, and V3) for Unsupervised Monocular Depth Estimation ...
wvangansbeke/Sparse-Depth-Completion
Predict dense depth maps from sparse and noisy LiDAR frames guided by RGB images. (Ranked 1st...