huanngzh/EpiDiff
[CVPR 2024] EpiDiff: Enhancing Multi-View Synthesis via Localized Epipolar-Constrained Diffusion
EpiDiff synthesizes realistic novel views of an object from a few existing images. Given a set of input images captured from different angles around an object, it produces a high-quality, consistent image of that object from a desired, unseen viewpoint. It is aimed at 3D artists, game developers, and anyone working with virtual objects who needs diverse, view-consistent renderings.
138 stars. No commits in the last 6 months.
Use this if you need to create convincing new views of 3D objects from a limited number of input images, especially when aiming for photorealistic results.
Not ideal if your primary goal is real-time rendering or if you only have a single input image and expect complex 3D reconstruction without additional data.
Stars: 138
Forks: 10
Language: Python
License: MIT
Category:
Last pushed: Aug 30, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/huanngzh/EpiDiff"
Open to everyone: 100 requests/day with no key. A free key raises the limit to 1,000/day.
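The same endpoint can be called from Python instead of curl. This is a minimal sketch using only the standard library; the response schema is not documented here, so the example just parses and prints whatever JSON the service returns.

```python
# Fetch the quality record for a repo from the pt-edge API shown above.
# Only the URL pattern is taken from this page; everything else is a sketch.
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(category: str, repo: str) -> str:
    """Build the endpoint URL, e.g. quality_url('diffusion', 'huanngzh/EpiDiff')."""
    return f"{API_BASE}/{category}/{repo}"


def fetch_quality(category: str, repo: str) -> dict:
    """GET the quality record; raises urllib.error.URLError on network failure."""
    with urllib.request.urlopen(quality_url(category, repo), timeout=10) as resp:
        return json.load(resp)


if __name__ == "__main__":
    print(json.dumps(fetch_quality("diffusion", "huanngzh/EpiDiff"), indent=2))
```

Note the free tier allows 100 requests/day, so cache responses rather than polling.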
Higher-rated alternatives
jayin92/Skyfall-GS
Skyfall-GS: Synthesizing Immersive 3D Urban Scenes from Satellite Imagery
Tencent-Hunyuan/Hunyuan3D-2
High-Resolution 3D Assets Generation with Large Scale Hunyuan3D Diffusion Models.
ActiveVisionLab/gaussctrl
[ECCV 2024] GaussCtrl: Multi-View Consistent Text-Driven 3D Gaussian Splatting Editing
caiyuanhao1998/Open-DiffusionGS
Baking Gaussian Splatting into Diffusion Denoiser for Fast and Scalable Single-stage Image-to-3D...
deepseek-ai/DreamCraft3D
[ICLR 2024] Official implementation of DreamCraft3D: Hierarchical 3D Generation with...