TIGER-AI-Lab/VIEScore

Visual Instruction-guided Explainable Metric. Code for "Towards Explainable Metrics for Conditional Image Synthesis Evaluation" (ACL 2024)

Score: 30 / 100 (Emerging)

This tool helps researchers and developers evaluate the quality of images and videos generated by AI models. You input a generated image or video alongside the text instructions that were used to create it, and the tool outputs a detailed report with scores for semantic consistency and perceptual quality, plus an overall rating. This is useful for anyone working on improving AI image and video generation systems.

No commits in the last 6 months.

Use this if you need an explainable, human-aligned metric to assess and compare the performance of different AI models that generate images or videos from text.

Not ideal if you are looking for a tool to generate images or videos yourself, rather than evaluate existing ones.

AI image generation, video synthesis, model evaluation, generative AI, computer vision, research
Stale (6m), No Package, No Dependents
Maintenance: 0 / 25
Adoption: 8 / 25
Maturity: 16 / 25
Community: 6 / 25

How are scores calculated?

Stars: 67
Forks: 3
Language: Python
License: MIT
Last pushed: Nov 19, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/TIGER-AI-Lab/VIEScore"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
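The same endpoint can be queried from Python. A minimal sketch using only the standard library; it assumes the endpoint returns JSON, and the `quality_url` and `fetch_quality` helper names are illustrative, not part of any official client:

```python
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality/diffusion"

def quality_url(owner: str, repo: str) -> str:
    """Build the API URL for a given GitHub repository."""
    return f"{API_BASE}/{owner}/{repo}"

def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch the quality report; assumes the endpoint returns a JSON body."""
    with urllib.request.urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)

if __name__ == "__main__":
    # Print the URL being queried (same as the curl example above).
    print(quality_url("TIGER-AI-Lab", "VIEScore"))
```

Swap in an API key header via `urllib.request.Request` if you register for the higher rate limit.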