kaist-cvml/I-HallA-v1.0
[AAAI 2025] Official Implementation of I-HallA v1.0
This project evaluates how accurately text-to-image models generate images that reflect factual information. Given an AI-generated image and a set of questions about its content, it uses a question-answering system to determine whether the image depicts the facts correctly. It is aimed at researchers and developers who need to assess the factual accuracy of text-to-image models they build or use.
No commits in the last 6 months.
Use this if you need to rigorously test whether text-to-image generation models produce factually correct images based on input text, using an automated question-answering approach.
Not ideal if you are looking for a tool to improve the aesthetic quality or style of generated images, as its focus is solely on factual accuracy.
Stars: 13
Forks: 4
Language: Python
License: MIT
Category:
Last pushed: Feb 02, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/kaist-cvml/I-HallA-v1.0"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
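If you prefer to call the endpoint from code rather than curl, a minimal Python sketch is below. The URL pattern is taken from the curl example above; the response fields and the meaning of the `transformers` path segment are assumptions, so check the actual JSON you get back before relying on any field names.

```python
# Hypothetical sketch of programmatic access to the quality API shown above.
# The URL pattern comes from the curl example; the response schema is not
# documented here, so the result is returned as a raw dict.
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the API URL for a repository's quality record."""
    return f"{BASE}/{category}/{owner}/{repo}"


def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """GET the quality record. Anonymous access is limited to 100 requests/day."""
    with urllib.request.urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)


if __name__ == "__main__":
    data = fetch_quality("transformers", "kaist-cvml", "I-HallA-v1.0")
    print(json.dumps(data, indent=2))
```

With an API key (for the 1,000/day tier), you would presumably pass it as a header or query parameter; the exact mechanism is not specified on this page.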
Higher-rated alternatives
THU-BPM/MarkLLM
MarkLLM: An Open-Source Toolkit for LLM Watermarking. (EMNLP 2024 System Demonstration)
git-disl/Vaccine
This is the official code for the paper "Vaccine: Perturbation-aware Alignment for Large...
zjunlp/Deco
[ICLR 2025] MLLM can see? Dynamic Correction Decoding for Hallucination Mitigation
HillZhang1999/ICD
Code & Data for our Paper "Alleviating Hallucinations of Large Language Models through Induced...
voidism/DoLa
Official implementation for the paper "DoLa: Decoding by Contrasting Layers Improves Factuality...