iCVTEAM/IPSM
How to Use Diffusion Priors under Sparse Views? (NeurIPS 2024)
This project helps 3D content creators and researchers generate detailed 3D scenes from as few as three input images. It takes the sparse images and an initial 3D point cloud (from tools like COLMAP) as input and produces a high-fidelity 3D representation of the scene. It is aimed at anyone working on 3D reconstruction and scene generation.
No commits in the last 6 months.
Use this if you need to create realistic 3D models or scenes but only have a handful of photographs (sparse views) of the real-world object or environment.
Not ideal if you already have dense, high-quality image sets or lidar scans for 3D reconstruction, or if you are looking for a fully automated, one-click solution without any technical setup.
Stars: 34
Forks: 4
Language: Python
License: —
Category: —
Last pushed: Dec 23, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/iCVTEAM/IPSM"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
jayin92/Skyfall-GS
Skyfall-GS: Synthesizing Immersive 3D Urban Scenes from Satellite Imagery
Tencent-Hunyuan/Hunyuan3D-2
High-Resolution 3D Assets Generation with Large Scale Hunyuan3D Diffusion Models.
ActiveVisionLab/gaussctrl
[ECCV 2024] GaussCtrl: Multi-View Consistent Text-Driven 3D Gaussian Splatting Editing
caiyuanhao1998/Open-DiffusionGS
Baking Gaussian Splatting into Diffusion Denoiser for Fast and Scalable Single-stage Image-to-3D...
deepseek-ai/DreamCraft3D
[ICLR 2024] Official implementation of DreamCraft3D: Hierarchical 3D Generation with...