zhizdev/sparsefusion

[CVPR 2023] SparseFusion: Distilling View-conditioned Diffusion for 3D Reconstruction

Quality score: 29 / 100 (Experimental)

SparseFusion helps 3D artists, designers, and engineers reconstruct detailed 3D models from a very limited number of real-world images. You provide 2 or more photographs of an object from different angles, along with their relative camera positions. The project then generates a complete, realistic 3D neural scene representation, filling in unobserved or complex areas with plausible detail.

378 stars. No commits in the last 6 months.

Use this if you need to create accurate 3D models of objects using only a few input photographs and want to generate realistic details for missing or uncertain parts of the object.

Not ideal if you require extremely high precision for engineering or measurement applications where every millimetre of the reconstructed model must be geometrically exact based purely on sensor data.

3D-reconstruction computer-vision photogrammetry digital-asset-creation virtual-reality-asset-generation
No License · Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 10 / 25
Maturity 8 / 25
Community 11 / 25

How are scores calculated?

Stars: 378
Forks: 17
Language: Python
License: None
Last pushed: Apr 11, 2023
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/zhizdev/sparsefusion"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
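For scripted use, the same endpoint can be called from Python. A minimal sketch, assuming only the URL pattern shown in the curl example above (the response schema and any API-key header name are not documented here, so the fetch is left as a commented hint):

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    # Builds the endpoint URL following the path layout in the curl example:
    # /api/v1/quality/<category>/<owner>/<repo>
    return f"{BASE}/{category}/{owner}/{repo}"

url = quality_url("diffusion", "zhizdev", "sparsefusion")

# To actually fetch the data (requires network access; response fields
# are assumed to be JSON but are not specified on this page):
# with urllib.request.urlopen(url) as resp:
#     data = json.load(resp)
```

Without a key this is subject to the 100 requests/day limit noted above.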