mohammadasim98/met3r
MEt3R: Measuring Multi-View Consistency in Generated Images
This tool lets researchers and engineers automatically assess the multi-view consistency of images produced by AI generation models. You input a pair of generated images that are meant to depict different views of the same 3D scene, and the tool outputs a score indicating how consistent those views are with each other, so you can tell whether your generative model is producing geometrically plausible 3D relationships between images.
Use this if you are developing or evaluating AI models that generate multiple images from different viewpoints, and you need an objective way to measure if those images are consistent with a shared 3D structure.
Not ideal if you are looking for metrics to evaluate general image quality, artistic style, or single-image realism, as it focuses specifically on consistency between views.
Stars
163
Forks
10
Language
Python
License
MIT
Category
Last pushed
Feb 23, 2026
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/mohammadasim98/met3r"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
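If you prefer to query the endpoint from code rather than curl, a minimal Python sketch is below. The endpoint path is taken from the curl example above; the JSON response schema is not documented here, so the payload is returned as a plain dict rather than assuming specific field names.

```python
# Minimal client sketch for the pt-edge quality API.
# The endpoint path comes from the curl example; no response
# schema is assumed beyond "the body is JSON".
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality/diffusion"


def build_url(repo: str) -> str:
    """Build the per-repository endpoint URL for an 'owner/name' string."""
    return f"{BASE}/{repo}"


def fetch_quality(repo: str) -> dict:
    """Fetch and decode the JSON payload for a repository (network required)."""
    with urllib.request.urlopen(build_url(repo)) as resp:
        return json.load(resp)


if __name__ == "__main__":
    print(build_url("mohammadasim98/met3r"))
```

Without an API key this counts against the anonymous 100 requests/day quota, so cache results if you poll many repositories.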
Higher-rated alternatives
jayin92/Skyfall-GS
Skyfall-GS: Synthesizing Immersive 3D Urban Scenes from Satellite Imagery
Tencent-Hunyuan/Hunyuan3D-2
High-Resolution 3D Assets Generation with Large Scale Hunyuan3D Diffusion Models.
ActiveVisionLab/gaussctrl
[ECCV 2024] GaussCtrl: Multi-View Consistent Text-Driven 3D Gaussian Splatting Editing
caiyuanhao1998/Open-DiffusionGS
Baking Gaussian Splatting into Diffusion Denoiser for Fast and Scalable Single-stage Image-to-3D...
deepseek-ai/DreamCraft3D
[ICLR 2024] Official implementation of DreamCraft3D: Hierarchical 3D Generation with...