zhizdev/sparsefusion
[CVPR 2023] SparseFusion: Distilling View-conditioned Diffusion for 3D Reconstruction
SparseFusion helps 3D artists, designers, and engineers reconstruct detailed 3D models from a very limited number of real-world images. You provide 2 or more photographs of an object from different angles, along with their relative camera positions. The project then generates a complete, realistic 3D neural scene representation, filling in unobserved or complex areas with plausible detail.
378 stars. No commits in the last 6 months.
Use this if you need to create accurate 3D models of objects using only a few input photographs and want to generate realistic details for missing or uncertain parts of the object.
Not ideal if you require exact, measurement-grade geometry derived purely from sensor data, since SparseFusion fills unobserved regions with plausible rather than measured detail.
Stars: 378
Forks: 17
Language: Python
License: —
Category: diffusion
Last pushed: Apr 11, 2023
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/zhizdev/sparsefusion"
Open to everyone: 100 requests/day with no key. A free key raises the limit to 1,000 requests/day.
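For scripted use, the same endpoint can be called from Python. This is a minimal sketch assuming the endpoint returns JSON; only the URL pattern comes from the curl command above, while the helper names and the response schema are assumptions.

```python
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the per-repo endpoint URL shown in the curl example."""
    return f"{API_BASE}/{category}/{owner}/{repo}"


def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """Fetch and decode the payload (assumes a JSON response body)."""
    with urllib.request.urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)


# Example (hits the network, subject to the 100 requests/day limit):
# data = fetch_quality("diffusion", "zhizdev", "sparsefusion")
# print(data)
```

If you register for a key, pass it however the API documents (likely a header or query parameter); that detail is not shown in the listing, so it is left out of the sketch.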
Higher-rated alternatives
jayin92/Skyfall-GS
Skyfall-GS: Synthesizing Immersive 3D Urban Scenes from Satellite Imagery
Tencent-Hunyuan/Hunyuan3D-2
High-Resolution 3D Assets Generation with Large Scale Hunyuan3D Diffusion Models.
ActiveVisionLab/gaussctrl
[ECCV 2024] GaussCtrl: Multi-View Consistent Text-Driven 3D Gaussian Splatting Editing
caiyuanhao1998/Open-DiffusionGS
Baking Gaussian Splatting into Diffusion Denoiser for Fast and Scalable Single-stage Image-to-3D...
deepseek-ai/DreamCraft3D
[ICLR 2024] Official implementation of DreamCraft3D: Hierarchical 3D Generation with...