matsuolab/multibanana

[CVPR 2026 Main] MultiBanana: A Challenging Benchmark for Multi-Reference Text-to-Image Generation

24 / 100
Experimental

This project provides a standardized way to test how well AI models can create new images from multiple existing images and a text description. It takes a collection of reference images and a text prompt, then evaluates how faithfully the AI-generated image reflects them. It is aimed at AI researchers and developers who are building or evaluating text-to-image generation models and need to verify that generated images accurately reflect the provided references.

Use this if you are developing or comparing text-to-image AI models and need a reliable, consistent method to benchmark their performance, especially in scenarios requiring adherence to multiple visual references.

Not ideal if you are an end-user looking to generate images yourself, rather than to develop or evaluate the underlying AI generation models.
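The benchmark's actual metrics are defined in the repository itself. As a generic illustration of the multi-reference evaluation pattern described above, one common approach embeds the generated image, each reference image, and the prompt, then aggregates pairwise similarities. The cosine-similarity scoring and mean aggregation below are illustrative assumptions, not MultiBanana's protocol:

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors (plain lists)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def multi_reference_score(gen_emb, ref_embs, text_emb):
    """Average similarity of a generated-image embedding to every
    reference-image embedding and to the text-prompt embedding.
    (Illustrative aggregation only; not the benchmark's metric.)"""
    sims = [cosine(gen_emb, r) for r in ref_embs] + [cosine(gen_emb, text_emb)]
    return sum(sims) / len(sims)
```

In practice the embeddings would come from a vision-language model; here they are left as plain vectors so the aggregation logic stays self-contained.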

AI model evaluation · image generation · research · computer vision · benchmarks · generative AI
No License · No Package · No Dependents
Maintenance 13 / 25
Adoption 6 / 25
Maturity 5 / 25
Community 0 / 25


Stars: 20
Forks:
Language: Python
License: None
Last pushed: Mar 17, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/matsuolab/multibanana"

Open to everyone: 100 requests/day, no key needed. Get a free key for 1,000/day.
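The same endpoint can be queried programmatically. A minimal Python sketch using only the standard library; the structure of the JSON response is not documented here, so the code returns the parsed dict as-is rather than assuming specific field names:

```python
import json
import urllib.request

BASE_URL = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(ecosystem: str, owner: str, repo: str) -> str:
    """Build the quality-API URL for a given project,
    matching the curl example above."""
    return f"{BASE_URL}/{ecosystem}/{owner}/{repo}"

def fetch_quality(ecosystem: str, owner: str, repo: str) -> dict:
    """Fetch the quality report and parse it as JSON.
    Inspect the returned dict; its keys are not documented here."""
    with urllib.request.urlopen(quality_url(ecosystem, owner, repo)) as resp:
        return json.load(resp)

# Example (performs a live network request):
# report = fetch_quality("diffusion", "matsuolab", "multibanana")
```

Under the open tier, unauthenticated calls are limited to 100 requests per day, so cache responses rather than polling.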