philippwulff/DGD-NeRF

Depth-Guided Dynamic Neural Radiance Field using RGB-D data

14 / 100 (Experimental)

This project helps 3D content creators, VFX artists, and researchers render realistic novel-view videos of dynamic scenes. It takes a sequence of standard color (RGB) images paired with depth information (such as from a LiDAR sensor) as input. The output is a video showing the scene from any desired viewpoint at any point in time, even if those specific views were never captured.

No commits in the last 6 months.

Use this if you need to create compelling 3D visualizations or animations of non-rigidly moving objects and people from limited input video footage.

Not ideal if you only have standard 2D video without depth data, or if your scene is entirely static.

3D-reconstruction visual-effects animation computer-vision volumetric-capture
No License · Stale (6m) · No Package · No Dependents
Maintenance: 0 / 25
Adoption: 6 / 25
Maturity: 8 / 25
Community: 0 / 25
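
The overall score is the sum of the four category scores: 0 + 6 + 8 + 0 = 14 out of 100.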

Stars: 16
Forks:
Language: Jupyter Notebook
License: None
Last pushed: Apr 04, 2023
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/philippwulff/DGD-NeRF"

Open to everyone: 100 requests/day with no API key needed. Get a free key for 1,000 requests/day.
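
If you want to consume this data programmatically rather than via curl, below is a minimal Python sketch using only the standard library. The URL is the endpoint shown above; the shape of the JSON response is not documented on this page, so the sketch prints the full payload instead of assuming field names.

import json
import urllib.request

# Endpoint shown above; no API key is needed for up to 100 requests/day.
URL = "https://pt-edge.onrender.com/api/v1/quality/diffusion/philippwulff/DGD-NeRF"

def fetch_quality(url=URL):
    """Fetch the quality record and decode the JSON body."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)

if __name__ == "__main__":
    data = fetch_quality()
    # Printing the full payload avoids guessing field names; inspect it to
    # find the overall score and per-category breakdown shown on this card.
    print(json.dumps(data, indent=2))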