philippwulff/DGD-NeRF
Depth-Guided Dynamic Neural Radiance Field using RGB-D data
This project helps 3D content creators, VFX artists, and researchers render realistic novel-view videos of dynamic, non-rigidly moving scenes. It takes a sequence of standard color (RGB) images paired with depth information (e.g., from a LiDAR sensor) as input. The output is a video showing the scene from any desired viewpoint and at any point in time, even if those specific views were never captured.
No commits in the last 6 months.
Use this if you need to create compelling 3D visualizations or animations of non-rigidly moving objects and people from limited input video footage.
Not ideal if you only have standard 2D video without depth data, or if your scene is static with no moving objects.
Stars: 16
Forks: —
Language: Jupyter Notebook
License: —
Category: —
Last pushed: Apr 04, 2023
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/philippwulff/DGD-NeRF"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
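For programmatic use, here is a minimal Python sketch of the same request. It assumes the endpoint returns JSON; the response field names are not documented on this page, so the example simply prints the parsed payload for inspection.

import requests

# Same endpoint as the curl command above; anonymous access allows 100 requests/day.
url = "https://pt-edge.onrender.com/api/v1/quality/diffusion/philippwulff/DGD-NeRF"
response = requests.get(url, timeout=10)
response.raise_for_status()
data = response.json()  # field names are undocumented here, so just inspect the payload
print(data)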
Higher-rated alternatives
rundiwu/DeepCAD
code for our ICCV 2021 paper "DeepCAD: A Deep Generative Network for Computer-Aided Design Models"
XingangPan/GAN2Shape
Code for GAN2Shape (ICLR2021 oral)
ayaanzhaque/instruct-nerf2nerf
Instruct-NeRF2NeRF: Editing 3D Scenes with Instructions (ICCV 2023)
compphoto/Intrinsic
Repo for the papers "Intrinsic Image Decomposition via Ordinal Shading" (TOG 2023) and "Colorful...
mworchel/differentiable-shadow-mapping
Differentiable Shadow Mapping for Efficient Inverse Graphics (CVPR 2023)