SimonGiebenhain/MonoNPHM
[CVPR 2024 Highlight] MonoNPHM: Dynamic Head Reconstruction from Monocular Videos
This project creates detailed, animatable 3D models of human heads from standard video footage. You input a monocular video of a person's head, and it outputs a dynamic 3D reconstruction capturing their expressions and movements. It is designed for researchers and professionals in computer graphics, animation, or virtual reality who need high-fidelity digital avatars or facial performance capture.
180 stars. No commits in the last 6 months.
Use this if you need to generate realistic, dynamic 3D head models from readily available single-camera video recordings.
Not ideal if you need a quick, user-friendly tool for general 3D scanning or if your primary interest is full-body reconstruction.
Stars
180
Forks
25
Language
Python
License
—
Category
computer-vision
Last pushed
Nov 03, 2024
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/computer-vision/SimonGiebenhain/MonoNPHM"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
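The same endpoint can be called from Python. As a minimal sketch, the helper below only builds the request URL from the pattern in the curl command above; the response schema is not documented here, so the actual fetch is shown as a hedged usage comment rather than parsed fields.

```python
# Sketch: construct the pt-edge quality-API URL for a repository.
# The base URL and path shape (category/owner/repo) come from the
# curl example above; nothing about the JSON response is assumed.

def quality_api_url(category: str, owner: str, repo: str) -> str:
    """Return the API URL for a repository's quality data."""
    base = "https://pt-edge.onrender.com/api/v1/quality"
    return f"{base}/{category}/{owner}/{repo}"

url = quality_api_url("computer-vision", "SimonGiebenhain", "MonoNPHM")
print(url)
# → https://pt-edge.onrender.com/api/v1/quality/computer-vision/SimonGiebenhain/MonoNPHM

# To actually fetch (100 requests/day without a key):
#   import json, urllib.request
#   data = json.load(urllib.request.urlopen(url))
```

Keeping URL construction in one function makes it easy to swap in a different category or repository without editing the shell command by hand.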
Higher-rated alternatives
vita-epfl/monoloco
A 3D vision library from 2D keypoints: monocular and stereo 3D detection for humans, social...
fangchangma/self-supervised-depth-completion
ICRA 2019 "Self-supervised Sparse-to-Dense: Self-supervised Depth Completion from LiDAR and...
nburrus/stereodemo
Small Python utility to compare and visualize the output of various stereo depth estimation algorithms
JiawangBian/sc_depth_pl
SC-Depth (V1, V2, and V3) for Unsupervised Monocular Depth Estimation ...
wvangansbeke/Sparse-Depth-Completion
Predict dense depth maps from sparse and noisy LiDAR frames guided by RGB images. (Ranked 1st...