zhizdev/mvdfusion
[CVPR 2024] MVD-Fusion: Single-view 3D via Depth-consistent Multi-view Generation
This tool helps 3D artists, game developers, and e-commerce product photographers quickly create 3D representations from a single image. Given one RGB image of an object, it generates multiple novel views of that object, each with an associated depth map, simulating how it would look from different angles. This is ideal for professionals who need to visualize or render objects in 3D without extensive modeling.
131 stars. No commits in the last 6 months.
Use this if you need to generate multiple realistic 3D views and corresponding depth information from just one input image for modeling or visualization purposes.
Not ideal if you require highly precise, measured 3D models for engineering or manufacturing, as this tool focuses on visual generation rather than exact reconstruction.
Stars
131
Forks
6
Language
Python
License
MIT
Category
Diffusion
Last pushed
Apr 29, 2024
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/zhizdev/mvdfusion"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
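The curl call above can also be made from Python. A minimal sketch using only the standard library, assuming the endpoint returns a JSON object (the exact response fields are not documented here):

```python
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the endpoint URL for a repository's quality record."""
    return f"{API_BASE}/{category}/{owner}/{repo}"

def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """Fetch and decode the quality record as JSON.

    Counts against the 100 requests/day anonymous quota unless an
    API key is supplied (key-passing mechanism not shown here).
    """
    with urllib.request.urlopen(quality_url(category, owner, repo)) as resp:
        return json.loads(resp.read().decode("utf-8"))

# Matches the curl example above:
url = quality_url("diffusion", "zhizdev", "mvdfusion")
```

Keeping URL construction separate from the request makes the endpoint easy to verify without hitting the rate limit.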
Higher-rated alternatives
jayin92/Skyfall-GS
Skyfall-GS: Synthesizing Immersive 3D Urban Scenes from Satellite Imagery
Tencent-Hunyuan/Hunyuan3D-2
High-Resolution 3D Assets Generation with Large Scale Hunyuan3D Diffusion Models.
ActiveVisionLab/gaussctrl
[ECCV 2024] GaussCtrl: Multi-View Consistent Text-Driven 3D Gaussian Splatting Editing
caiyuanhao1998/Open-DiffusionGS
Baking Gaussian Splatting into Diffusion Denoiser for Fast and Scalable Single-stage Image-to-3D...
deepseek-ai/DreamCraft3D
[ICLR 2024] Official implementation of DreamCraft3D: Hierarchical 3D Generation with...