ubc-vision/vivid123
[CVPR 2024 Highlight] ViVid-1-to-3: Novel View Synthesis with Video Diffusion Models
This project helps 3D artists, game developers, and product designers create consistent 360-degree views of an object from a single input image. Given one picture, it generates a video showing the object from new viewpoints while keeping its appearance smooth and consistent, which makes it well suited for quickly producing turnaround animations or varied product shots.
179 stars. No commits in the last 6 months.
Use this if you need to generate multiple, consistent views of an object from a single image for presentations, virtual environments, or product showcases.
Not ideal if you need to reconstruct a 3D model with precise geometric accuracy, as this focuses on generating realistic video views rather than detailed mesh data.
Stars: 179
Forks: 9
Language: Python
License: Apache-2.0
Category:
Last pushed: Jul 24, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/ubc-vision/vivid123"
Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000 requests/day.
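If you prefer scripting over curl, the same endpoint can be fetched from Python. This is a minimal sketch: the URL pattern `/api/v1/quality/<category>/<owner>/<repo>` is generalized from the single example above and may not hold for other categories, and the response fields are not documented here, so the helper simply returns the parsed JSON as-is.

```python
import json
import urllib.request

# Base path inferred from the documented example endpoint (an assumption).
BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(owner: str, repo: str, category: str = "diffusion") -> str:
    """Build the quality-API URL for a repository (assumed URL pattern)."""
    return f"{BASE}/{category}/{owner}/{repo}"


def fetch_quality(owner: str, repo: str, category: str = "diffusion") -> dict:
    """Plain anonymous GET; the no-key tier allows 100 requests/day."""
    with urllib.request.urlopen(quality_url(owner, repo, category)) as resp:
        return json.load(resp)


print(quality_url("ubc-vision", "vivid123"))
```

Swap in `fetch_quality("ubc-vision", "vivid123")` to retrieve the live data; inspect the returned dictionary to see which fields the API actually exposes.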
Higher-rated alternatives
jayin92/Skyfall-GS
Skyfall-GS: Synthesizing Immersive 3D Urban Scenes from Satellite Imagery
Tencent-Hunyuan/Hunyuan3D-2
High-Resolution 3D Assets Generation with Large Scale Hunyuan3D Diffusion Models.
ActiveVisionLab/gaussctrl
[ECCV 2024] GaussCtrl: Multi-View Consistent Text-Driven 3D Gaussian Splatting Editing
caiyuanhao1998/Open-DiffusionGS
Baking Gaussian Splatting into Diffusion Denoiser for Fast and Scalable Single-stage Image-to-3D...
deepseek-ai/DreamCraft3D
[ICLR 2024] Official implementation of DreamCraft3D: Hierarchical 3D Generation with...