TIGER-AI-Lab/VIEScore
Visual Instruction-guided Explainable Metric. Code for the ACL 2024 paper "VIEScore: Towards Explainable Metrics for Conditional Image Synthesis Evaluation"
This tool helps researchers and developers evaluate the quality of AI-generated images and videos. You input a generated image or video alongside the text instruction used to create it, and the tool outputs a detailed score that breaks down semantic consistency, perceptual quality, and an overall rating. This is useful for anyone working on improving AI image and video generation systems.
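As a sketch of how such a breakdown can combine into a single rating: the paper aggregates a semantic-consistency (SC) score and a perceptual-quality (PQ) score, each on a 0–10 scale, into an overall rating. The snippet below is a minimal illustration, not the repository's actual API; the function name, the min-over-subscores handling, and the geometric-mean aggregation are stated assumptions:

```python
import math


def overall_score(sc_subscores, pq_subscores):
    """Combine sub-scores into one rating (illustrative sketch only).

    Assumptions (not the repo's documented API): SC and PQ sub-scores
    are each on a 0-10 scale, the weakest sub-score in each group
    dominates (min), and the overall rating is the geometric mean of
    the resulting SC and PQ values.
    """
    sc = min(sc_subscores)  # weakest semantic-consistency aspect
    pq = min(pq_subscores)  # weakest perceptual-quality aspect
    return math.sqrt(sc * pq)


# Strong semantics but mediocre quality drags the overall rating down.
print(overall_score([8, 9], [5, 6]))
```

The geometric mean penalizes an image that scores high on one axis but low on the other, which matches the intuition that a faithful-but-ugly or pretty-but-off-prompt image should not receive a high overall rating.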
No commits in the last 6 months.
Use this if you need an explainable, human-aligned metric to assess and compare the performance of different AI models that generate images or videos from text.
Not ideal if you are looking for a tool to generate images or videos yourself, rather than evaluate existing ones.
Stars: 67
Forks: 3
Language: Python
License: MIT
Category:
Last pushed: Nov 19, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/TIGER-AI-Lab/VIEScore"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
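The same endpoint can be called from Python with only the standard library. A minimal sketch; the response schema is not documented here, so the code simply returns the parsed JSON as-is:

```python
import json
from urllib.request import urlopen

API_BASE = "https://pt-edge.onrender.com/api/v1/quality/diffusion"


def quality_url(owner: str, repo: str) -> str:
    """Build the per-repository endpoint URL."""
    return f"{API_BASE}/{owner}/{repo}"


def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch a repo's quality record; returns whatever JSON the API sends."""
    with urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)


print(quality_url("TIGER-AI-Lab", "VIEScore"))
```

Without an API key this stays within the 100-requests/day anonymous limit, so cache responses locally if you are querying many repositories.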
Higher-rated alternatives
Vchitect/VBench
[CVPR2024 Highlight] VBench - We Evaluate Video Generation
VectorSpaceLab/OmniGen
OmniGen: Unified Image Generation. https://arxiv.org/pdf/2409.11340
EndlessSora/focal-frequency-loss
[ICCV 2021] Focal Frequency Loss for Image Reconstruction and Synthesis
JIA-Lab-research/DreamOmni2
This project is the official implementation of 'DreamOmni2: Multimodal Instruction-based Editing...
SkyworkAI/UniPic
Open-source SOTA multi-image editing model