WHU-USI3DV/VistaDream
[ICCV 2025] VistaDream: Sampling multiview consistent images for single-view scene reconstruction
This project helps you reconstruct a high-quality 3D scene from just a single photograph or a few sparse images. You provide an input image, and it generates multiple consistent views and depth maps, effectively building a virtual 3D representation of the scene. This is ideal for 3D artists, game developers, or architects looking to quickly create 3D models from existing images.
528 stars. No commits in the last 6 months.
Use this if you need to generate a realistic 3D scene from a single 2D image without extensive manual modeling or photography.
Not ideal if you require highly precise, measured 3D models for engineering or manufacturing, as this focuses on visual consistency rather than exact geometric accuracy.
Stars: 528
Forks: 25
Language: Python
License: MIT
Category: diffusion
Last pushed: Jul 02, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/WHU-USI3DV/VistaDream"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
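The same endpoint can be called programmatically. A minimal Python sketch using only the standard library; note that the response schema and the API-key header name are not documented on this page, so both are assumptions:

```python
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the endpoint URL for a repository's quality data."""
    return f"{API_BASE}/{category}/{owner}/{repo}"

def fetch_quality(category: str, owner: str, repo: str, api_key: str = "") -> dict:
    """Fetch quality data as a dict.

    Passing an API key (assumed to go in a Bearer Authorization header,
    which is a guess -- the page does not specify) raises the daily limit
    from 100 to 1,000 requests.
    """
    req = urllib.request.Request(quality_url(category, owner, repo))
    if api_key:
        req.add_header("Authorization", f"Bearer {api_key}")  # header name is an assumption
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp)

# Reconstructs the exact URL from the curl example above.
print(quality_url("diffusion", "WHU-USI3DV", "VistaDream"))
```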
Higher-rated alternatives
jayin92/Skyfall-GS
Skyfall-GS: Synthesizing Immersive 3D Urban Scenes from Satellite Imagery
Tencent-Hunyuan/Hunyuan3D-2
High-Resolution 3D Assets Generation with Large Scale Hunyuan3D Diffusion Models.
ActiveVisionLab/gaussctrl
[ECCV 2024] GaussCtrl: Multi-View Consistent Text-Driven 3D Gaussian Splatting Editing
caiyuanhao1998/Open-DiffusionGS
Baking Gaussian Splatting into Diffusion Denoiser for Fast and Scalable Single-stage Image-to-3D...
deepseek-ai/DreamCraft3D
[ICLR 2024] Official implementation of DreamCraft3D: Hierarchical 3D Generation with...