boomb0om/text2image-benchmark

Benchmark for generative image models

Score: 34/100 · Emerging

This project helps researchers and developers evaluate the quality of text-to-image models. It takes a set of generated images and their corresponding text descriptions, or directly uses a text-to-image model, and outputs standardized metrics like FID and CLIP-score. This is primarily for machine learning engineers, AI researchers, and data scientists working with generative AI.
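At its core, CLIP score measures how well an image matches its text prompt as the cosine similarity between the image and text embeddings produced by a CLIP model. A minimal sketch of that similarity computation (the embeddings below are random placeholders, not real CLIP outputs, and this is not this project's API):

```python
import numpy as np

def clip_score(image_emb: np.ndarray, text_emb: np.ndarray) -> float:
    """Cosine similarity between L2-normalized image and text embeddings."""
    image_emb = image_emb / np.linalg.norm(image_emb)
    text_emb = text_emb / np.linalg.norm(text_emb)
    return float(image_emb @ text_emb)

# Placeholder 512-d vectors; in practice these come from a CLIP encoder.
rng = np.random.default_rng(0)
img, txt = rng.normal(size=512), rng.normal(size=512)
score = clip_score(img, txt)  # in [-1, 1]; higher means better alignment
```

In practice the benchmark averages such scores over many prompt/image pairs; FID instead compares feature distributions of generated and reference image sets.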

108 stars. No commits in the last 6 months.

Use this if you need to objectively compare the performance and image generation quality of different text-to-image AI models using established metrics.

Not ideal if you are an end-user simply looking to generate images and do not need to evaluate model performance scientifically.

generative-AI image-generation model-evaluation deep-learning-research computer-vision
Stale (6 months) · No Package · No Dependents
Maintenance: 0/25
Adoption: 9/25
Maturity: 16/25
Community: 9/25


Stars: 108
Forks: 6
Language: Jupyter Notebook
License: MIT
Last pushed: Sep 09, 2023
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/boomb0om/text2image-benchmark"

The API is open to everyone at 100 requests/day with no key; a free key raises the limit to 1,000 requests/day.