synthesiaresearch/humanrf
Official code for "HumanRF: High-Fidelity Neural Radiance Fields for Humans in Motion"
This project helps 3D artists, animators, and researchers generate highly realistic 3D models of humans in motion from video footage. It takes multi-camera video, masks, and calibration data as input to produce high-fidelity neural radiance fields. The primary users are professionals in computer graphics, visual effects, and academic research who need to create or analyze dynamic human forms.
493 stars. No commits in the last 6 months.
Use this if you need to create extremely lifelike digital representations of moving people for visual effects, virtual reality, or advanced animation projects.
Not ideal if you only need static 3D models or are working with limited computational resources, as it's designed for high-fidelity dynamic capture.
Stars: 493
Forks: 30
Language: Python
License: —
Category: —
Last pushed: Sep 17, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/computer-vision/synthesiaresearch/humanrf"
Open to everyone: 100 requests/day with no API key. A free key raises the limit to 1,000 requests/day.
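The curl command above can also be called from Python. The sketch below is a minimal example assuming only the URL layout shown in the curl line (`/quality/<category>/<owner>/<repo>`); the shape of the JSON response is not documented here, so `fetch_quality` simply returns the parsed body as-is.

```python
"""Minimal sketch of querying the quality API without a key.

The endpoint URL follows the curl example above; the response schema
is an assumption and is returned unmodified.
"""
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"


def build_url(category: str, owner: str, repo: str) -> str:
    # Path layout taken from the curl example:
    # /api/v1/quality/<category>/<owner>/<repo>
    return f"{BASE}/{category}/{owner}/{repo}"


def fetch_quality(category: str, owner: str, repo: str) -> dict:
    # Keyless access is rate-limited to 100 requests/day.
    with urllib.request.urlopen(build_url(category, owner, repo)) as resp:
        return json.load(resp)


if __name__ == "__main__":
    url = build_url("computer-vision", "synthesiaresearch", "humanrf")
    print(url)
```

Running the script prints the same URL used in the curl example; call `fetch_quality` only when you are within the rate limit.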
Higher-rated alternatives
weiyithu/NerfingMVS
[ICCV 2021 Oral] NerfingMVS: Guided Optimization of Neural Radiance Fields for Indoor Multi-view Stereo
Jumpat/SegmentAnythingin3D
Segment Anything in 3D with NeRFs (NeurIPS 2023 & IJCV 2025)
POSTECH-CVLab/SCNeRF
[ICCV21] Self-Calibrating Neural Radiance Fields
half-potato/nmf
Our method takes as input a collection of images (100 in our experiments) with known cameras,...
HankYe/PAGCP
[T-PAMI'23] PAGCP for the compression of YOLOv5