nv-tlabs/diffusion-renderer

[CVPR'25 Oral] Official implementation for "DiffusionRenderer: Neural Inverse and Forward Rendering with Video Diffusion Models"

Quality score: 39/100 (Emerging)

This project helps 3D artists, game developers, and visual effects professionals analyze real-world video footage to recover its underlying 3D geometry and material properties. You provide a video, and it outputs key scene information such as surface colors, depth, and how light interacts with materials. This enables realistic relighting and material editing for computer graphics and virtual production workflows.

352 stars. No commits in the last 6 months.

Use this if you need to extract detailed 3D scene information from video to realistically relight scenes, change materials, or integrate real-world elements into virtual environments without traditional 3D modeling.

Not ideal if you require highly precise, physically-based rendering simulations that demand explicit geometry and ray-tracing, as this tool provides a data-driven approximation.

Tags: 3D-rendering, video-processing, visual-effects, virtual-production, material-editing
Stale (6 months) · No package · No dependents
Maintenance: 2/25
Adoption: 10/25
Maturity: 15/25
Community: 12/25


Stars: 352
Forks: 19
Language: Python
License:
Last pushed: Jun 13, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/nv-tlabs/diffusion-renderer"

Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000 requests/day.
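For programmatic use, the curl command above can be wrapped in a small helper. This is a minimal sketch assuming the endpoint follows the pattern `/api/v1/quality/<query>/<owner>/<repo>` seen in the example and returns JSON; the response schema and the meaning of the first path segment are assumptions, not documented facts.

```python
import json
import urllib.request

# Base path taken from the curl example on this page.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(query: str, owner: str, repo: str) -> str:
    """Build the per-repository quality endpoint URL.

    `query` is the first path segment ("diffusion" in the example above);
    its exact semantics are an assumption here.
    """
    return f"{API_BASE}/{query}/{owner}/{repo}"


def fetch_quality(query: str, owner: str, repo: str) -> dict:
    """GET the endpoint and decode the JSON body (requires network access)."""
    with urllib.request.urlopen(quality_url(query, owner, repo)) as resp:
        return json.load(resp)


# Reproduces the URL from the curl example:
# quality_url("diffusion", "nv-tlabs", "diffusion-renderer")
```

Calling `fetch_quality` performs the same request as the curl example; the no-key tier should suffice for occasional lookups.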