SUDO-AI-3D/zero123plus
Code repository for Zero123++: a Single Image to Consistent Multi-view Diffusion Base Model.
Zero123++ generates consistent multi-view images of an object from a single input photograph: you provide one image, and it outputs a set of views of the object from surrounding angles, ready for 3D reconstruction or visual presentation. It is aimed at 3D artists, product designers, and e-commerce teams who need 3D representations or detailed product showcases from limited 2D imagery.
2,021 stars. No commits in the last 6 months.
Use this if you need to quickly generate multiple consistent views of an object from a single photograph, for 3D reconstruction, product visualization, or virtual try-on.
Not ideal if you need the model for commercial 3D generation: the code is Apache-2.0, but the model weights are restricted to non-commercial use.
Stars: 2,021
Forks: 138
Language: Python
License: Apache-2.0
Category: diffusion
Last pushed: Feb 23, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/SUDO-AI-3D/zero123plus"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
jayin92/Skyfall-GS
Skyfall-GS: Synthesizing Immersive 3D Urban Scenes from Satellite Imagery
Tencent-Hunyuan/Hunyuan3D-2
High-Resolution 3D Assets Generation with Large Scale Hunyuan3D Diffusion Models.
ActiveVisionLab/gaussctrl
[ECCV 2024] GaussCtrl: Multi-View Consistent Text-Driven 3D Gaussian Splatting Editing
caiyuanhao1998/Open-DiffusionGS
Baking Gaussian Splatting into Diffusion Denoiser for Fast and Scalable Single-stage Image-to-3D...
deepseek-ai/DreamCraft3D
[ICLR 2024] Official implementation of DreamCraft3D: Hierarchical 3D Generation with...