TIGER-AI-Lab/ImagenWorld

Stress-Testing Image Generation Models with Explainable Human Evaluation on Open-ended Real-World Tasks [ICLR 2026]

Overall score: 32 / 100 (Emerging)

This benchmark helps researchers and engineers rigorously evaluate how image generation and editing models perform on real-world tasks across domains such as artworks, photorealistic images, and information graphics. You supply a generative model and a set of test conditions (text prompts or reference images), and it outputs detailed human evaluations: scalar ratings plus specific failure tags attached to objects or segments within the generated images. It is designed for model developers and researchers who need to benchmark and understand the limitations of their image generation systems.

Use this if you are developing or researching image generation AI and need a comprehensive, human-centric benchmark to stress-test models across diverse real-world tasks and understand specific failure modes.

Not ideal if you are an end-user simply looking to generate or edit images without needing to evaluate the underlying model's performance.

AI model evaluation · Generative AI research · Image generation benchmarking · Computer vision development
No Package · No Dependents
Maintenance: 10 / 25
Adoption: 7 / 25
Maturity: 15 / 25
Community: 0 / 25

How are scores calculated?
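Based on the breakdown above, the overall rating appears to be the sum of the four category scores: 10 + 7 + 15 + 0 = 32 out of a possible 100.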

Stars: 28
Forks:
Language: Python
License: MIT
Last pushed: Mar 08, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/generative-ai/TIGER-AI-Lab/ImagenWorld"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
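For scripted access, here is a minimal Python sketch of the same request using only the standard library. The response schema is not documented on this page, so the example simply pretty-prints whatever JSON the endpoint returns.

import json
import urllib.request

# Same public endpoint as the curl example above; no API key is
# needed for up to 100 requests per day.
URL = "https://pt-edge.onrender.com/api/v1/quality/generative-ai/TIGER-AI-Lab/ImagenWorld"

with urllib.request.urlopen(URL, timeout=10) as resp:
    data = json.load(resp)

# Pretty-print the undocumented response body.
print(json.dumps(data, indent=2))

Sticking to the standard library keeps the sketch dependency-free; a third-party HTTP client such as requests would work just as well.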