Karbo123/RGBD-Diffusion
RGBD2: Generative Scene Synthesis via Incremental View Inpainting using RGBD Diffusion Models
This project synthesizes new views of a 3D scene from existing RGBD scans. Given a scan that captures both color and depth (such as output from a consumer 3D scanner), it incrementally inpaints new, consistent views as if a camera were moving through the scene. This is useful for researchers and developers working on virtual reality, robotics, or synthetic data generation for computer vision.
No commits in the last 6 months.
Use this if you need to generate realistic, novel 3D scenes or extend existing 3D scans with new camera perspectives and detailed depth information.
Not ideal if you are looking for a simple drag-and-drop tool for non-technical users, or if you don't have existing 3D scan data to work with.
Stars: 99
Forks: 3
Language: Python
License: MIT
Category:
Last pushed: Mar 17, 2023
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/Karbo123/RGBD-Diffusion"
Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000 requests/day.
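The curl command above can also be scripted. Below is a minimal Python sketch that builds the per-repository URL and fetches the record; note that the response schema is not documented here, so the code simply returns whatever JSON the endpoint provides (the `fetch_quality` helper name is our own, not part of the API).

```python
import json
from urllib.request import urlopen

# Base endpoint taken from the curl example above.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality/diffusion"


def build_url(owner: str, repo: str) -> str:
    """Construct the per-repository API URL (owner/repo path segments)."""
    return f"{API_BASE}/{owner}/{repo}"


def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch the quality record; assumes the endpoint returns JSON."""
    with urlopen(build_url(owner, repo)) as resp:
        return json.load(resp)


if __name__ == "__main__":
    # Print the URL for this repository; call fetch_quality() to hit the API.
    print(build_url("Karbo123", "RGBD-Diffusion"))
```

Without an API key this stays within the 100 requests/day anonymous limit; add authentication per the service's instructions if you need more.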
Higher-rated alternatives
jayin92/Skyfall-GS
Skyfall-GS: Synthesizing Immersive 3D Urban Scenes from Satellite Imagery
Tencent-Hunyuan/Hunyuan3D-2
High-Resolution 3D Assets Generation with Large Scale Hunyuan3D Diffusion Models.
ActiveVisionLab/gaussctrl
[ECCV 2024] GaussCtrl: Multi-View Consistent Text-Driven 3D Gaussian Splatting Editing
caiyuanhao1998/Open-DiffusionGS
Baking Gaussian Splatting into Diffusion Denoiser for Fast and Scalable Single-stage Image-to-3D...
deepseek-ai/DreamCraft3D
[ICLR 2024] Official implementation of DreamCraft3D: Hierarchical 3D Generation with...