huanngzh/EpiDiff

[CVPR 2024] EpiDiff: Enhancing Multi-View Synthesis via Localized Epipolar-Constrained Diffusion

Score: 37 / 100 (Emerging)

This project synthesizes realistic novel views of an object from a few existing images. Given a set of input images captured from different angles around an object, it produces a high-quality, consistent image of that object from a desired, unseen viewpoint. It's aimed at 3D artists, game developers, and anyone working with virtual objects who needs to create diverse visual representations.

138 stars. No commits in the last 6 months.

Use this if you need to create convincing new views of 3D objects from a limited number of input images, especially when aiming for photorealistic results.

Not ideal if your primary goal is real-time rendering or if you only have a single input image and expect complex 3D reconstruction without additional data.

3D-modeling virtual-photography game-asset-creation computer-graphics product-visualization
Stale (6 months) · No Package · No Dependents
Maintenance 0 / 25
Adoption 10 / 25
Maturity 16 / 25
Community 11 / 25

How are scores calculated?
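The four category scores above (each out of 25) add up to the overall 37/100, which suggests the total is a simple sum. This is an inference from the numbers shown, not a documented formula:

```python
# Category scores from the breakdown above (each out of 25).
scores = {"Maintenance": 0, "Adoption": 10, "Maturity": 16, "Community": 11}

# The overall score appears to be the plain sum of the four categories;
# this matches the displayed 37/100 but is inferred, not documented.
overall = sum(scores.values())
print(overall)  # → 37
```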

Stars: 138
Forks: 10
Language: Python
License: MIT
Last pushed: Aug 30, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/huanngzh/EpiDiff"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
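The endpoint in the curl example follows a predictable path layout. A minimal sketch for building the same URL from Python, assuming the segments are `{topic}/{owner}/{repo}` under the quality API (inferred from the single example above, not from published API docs):

```python
from urllib.parse import quote

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(topic: str, owner: str, repo: str) -> str:
    """Build the quality-score endpoint URL for a repository.

    The path layout (topic/owner/repo) is inferred from the curl
    example above; adjust if the API documents a different scheme.
    """
    return f"{API_BASE}/{quote(topic)}/{quote(owner)}/{quote(repo)}"

# Reproduces the curl example's URL:
print(quality_url("diffusion", "huanngzh", "EpiDiff"))
# → https://pt-edge.onrender.com/api/v1/quality/diffusion/huanngzh/EpiDiff
```

From there the response could be fetched with any HTTP client (e.g. `urllib.request` or `requests`) and parsed as JSON.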