nv-tlabs/diffusion-renderer
[CVPR'25 Oral] Official implementation for "DiffusionRenderer: Neural Inverse and Forward Rendering with Video Diffusion Models"
This project helps 3D artists, game developers, and visual effects professionals analyze real-world video footage to recover its underlying 3D geometry and material properties. Given a video, it estimates key scene information such as surface color, depth, and how light interacts with materials. This enables realistic relighting and material editing for computer graphics and virtual production workflows.
352 stars. No commits in the last 6 months.
Use this if you need to extract detailed 3D scene information from video to realistically relight scenes, change materials, or integrate real-world elements into virtual environments without traditional 3D modeling.
Not ideal if you require highly precise, physically-based rendering simulations that demand explicit geometry and ray-tracing, as this tool provides a data-driven approximation.
Stars: 352
Forks: 19
Language: Python
License: —
Category: —
Last pushed: Jun 13, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/nv-tlabs/diffusion-renderer"
Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000 requests/day.
Higher-rated alternatives
PRIS-CV/DemoFusion
Let us democratise high-resolution generation! (CVPR 2024)
mit-han-lab/distrifuser
[CVPR 2024 Highlight] DistriFusion: Distributed Parallel Inference for High-Resolution Diffusion Models
Tencent-Hunyuan/HunyuanPortrait
[CVPR-2025] The official code of HunyuanPortrait: Implicit Condition Control for Enhanced...
giuvecchio/matfuse
MatFuse: Controllable Material Generation with Diffusion Models (CVPR2024)
Shilin-LU/TF-ICON
[ICCV 2023] "TF-ICON: Diffusion-Based Training-Free Cross-Domain Image Composition" (Official...