stellar-gen-ai/stellar-metrics

Official Code for the evaluation metrics of Stellar: Systematic Evaluation of Human-Centric Personalized Text-to-Image Methods

Score: 34 / 100 (Emerging)

This tool helps researchers and developers evaluate how well their AI models generate personalized images from text descriptions, particularly when a specific person's identity or specific objects must be maintained. You supply your model's generated images along with a dataset of original images and prompts, and it computes scores indicating whether the model preserved identities and depicted relationships accurately. It is designed for people building or comparing advanced text-to-image AI systems.

No commits in the last 6 months.

Use this if you are developing or benchmarking AI models that generate images from text, particularly when the generated images need to faithfully preserve a specific human identity or accurately represent objects and their relationships as described in the prompt.

Not ideal if you are looking for a simple tool to generate images or if your primary focus is not on evaluating the fidelity of personalized or object-centric image generation.

AI-model-evaluation personalized-image-generation text-to-image computer-vision-research generative-AI
Badges: Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 4 / 25
Maturity 16 / 25
Community 14 / 25

How are scores calculated?
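The page does not publish the exact scoring formula, but the four category scores shown above (Maintenance 0, Adoption 4, Maturity 16, Community 14, each out of 25) sum to the displayed overall score: 0 + 4 + 16 + 14 = 34 out of 100. A minimal sketch under that simple-sum assumption (the site may actually weight categories differently):

```python
# Hypothetical helper: combine the four category scores (each 0-25)
# into the overall 0-100 score. The simple-sum assumption matches the
# numbers shown on this page but is not confirmed by the site.
def overall_score(maintenance: int, adoption: int, maturity: int, community: int) -> int:
    for s in (maintenance, adoption, maturity, community):
        if not 0 <= s <= 25:
            raise ValueError("each category score must be between 0 and 25")
    return maintenance + adoption + maturity + community

# Category scores shown for stellar-gen-ai/stellar-metrics:
print(overall_score(0, 4, 16, 14))  # 34, matching the displayed 34 / 100
```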

Stars: 8
Forks: 3
Language: Jupyter Notebook
License:
Last pushed: Apr 12, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/generative-ai/stellar-gen-ai/stellar-metrics"

Open to everyone: 100 requests/day, no key needed. Get a free key for 1,000/day.