iCVTEAM/IPSM

How to Use Diffusion Priors under Sparse Views? (NeurIPS 2024)

Quality score: 34 / 100 (Emerging)

This project helps 3D content creators and researchers generate detailed 3D scenes from a very limited number of input images, even as few as three. It takes a few sparse images and an initial 3D point cloud (from tools like COLMAP) as input, then produces a high-fidelity 3D representation of the scene. It is ideal for anyone working with 3D reconstruction and scene generation.

No commits in the last 6 months.

Use this if you need to create realistic 3D models or scenes but only have a handful of photographs (sparse views) of the real-world object or environment.

Not ideal if you already have dense, high-quality image sets or lidar scans for 3D reconstruction, or if you are looking for a fully automated, one-click solution without any technical setup.

3D-reconstruction computer-vision scene-generation 3D-modeling digital-twin
Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 7 / 25
Maturity 16 / 25
Community 11 / 25


Stars: 34
Forks: 4
Language: Python
License: not listed
Last pushed: Dec 23, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/iCVTEAM/IPSM"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
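As a sketch, the same endpoint can be queried from Python with the standard library. The response schema and the header used to pass an API key are not documented on this page, so the example only builds the request URL and returns the raw parsed JSON; `fetch_quality` is a hypothetical helper name.

```python
import json
import urllib.request

# Base endpoint taken from the curl command above.
BASE = "https://pt-edge.onrender.com/api/v1/quality/diffusion"

def quality_url(owner: str, repo: str) -> str:
    """Build the quality-score endpoint URL for a given repository."""
    return f"{BASE}/{owner}/{repo}"

def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch the quality data as parsed JSON (field names are undocumented here)."""
    with urllib.request.urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)

# URL for the repository shown on this page:
print(quality_url("iCVTEAM", "IPSM"))
# → https://pt-edge.onrender.com/api/v1/quality/diffusion/iCVTEAM/IPSM
```

Calling `fetch_quality("iCVTEAM", "IPSM")` performs the same request as the curl command; unauthenticated calls are limited to 100 per day as noted above.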