TIGER-AI-Lab/ImagenHub

A one-stop library to standardize the inference and evaluation of all the conditional image generation models. [ICLR 2024]

Score: 46 / 100 (Emerging)

This library helps researchers and practitioners compare how reliably different AI image generation models perform. You feed in a text prompt or an existing image, and it runs inference across multiple models. The output includes the generated images along with scores indicating how semantically consistent with the input and how perceptually high-quality those images are. It's for anyone developing, evaluating, or selecting conditional image generation models.


Use this if you need to systematically benchmark and compare the performance of multiple conditional image generation models against standardized tasks and metrics.

Not ideal if you are looking for an application to simply generate a single image for creative use without needing to compare models or evaluate their performance.
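The benchmarking workflow described above can be sketched as a simple generate-then-score loop. This is an illustrative outline only: the model names and metric functions below are hypothetical placeholders, not ImagenHub's actual API.

```python
# Hypothetical sketch of a standardized benchmark loop: for each model
# and each prompt, generate an image and score it on the two axes the
# page describes (semantic consistency and perceptual quality).
from dataclasses import dataclass


@dataclass
class EvalResult:
    model: str
    prompt: str
    semantic_consistency: float  # does the image match the prompt?
    perceptual_quality: float    # is the image visually clean?


def generate(model_name: str, prompt: str) -> bytes:
    # Placeholder: a real run would invoke the model's inference pipeline.
    return f"{model_name}:{prompt}".encode()


def score(image: bytes) -> tuple:
    # Placeholder metrics; ImagenHub combines automated and human ratings.
    return 0.8, 0.75


def benchmark(models, prompts):
    """Run every model on every prompt and collect per-image scores."""
    results = []
    for m in models:
        for p in prompts:
            img = generate(m, p)
            sc, pq = score(img)
            results.append(EvalResult(m, p, sc, pq))
    return results
```

The point of the standardized loop is that every model sees the same prompts and is scored by the same metrics, so results are directly comparable.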

AI-image-generation generative-AI-evaluation model-benchmarking computer-vision-research text-to-image
No Package No Dependents
Maintenance 6 / 25
Adoption 10 / 25
Maturity 16 / 25
Community 14 / 25


Stars: 178
Forks: 19
Language: Python
License: MIT
Last pushed: Dec 02, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/generative-ai/TIGER-AI-Lab/ImagenHub"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000/day.
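The same data can be fetched programmatically. A minimal Python sketch, using only the standard library; the endpoint URL comes from the curl command above, but the shape of the JSON response is an assumption and may differ.

```python
# Fetch a repo's quality record from the pt-edge API.
# Only the URL pattern is taken from the page; the response schema
# is not documented here, so we return the decoded JSON as-is.
import json
from urllib.request import urlopen

BASE = "https://pt-edge.onrender.com/api/v1/quality/generative-ai"


def quality_url(owner: str, repo: str) -> str:
    """Build the quality endpoint URL for a given repository."""
    return f"{BASE}/{owner}/{repo}"


def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch and decode the quality record (requires network access)."""
    with urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)
```

Usage: `fetch_quality("TIGER-AI-Lab", "ImagenHub")` hits the same endpoint as the curl command above.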