zjunlp/EasyDetect

[ACL 2024] An Easy-to-use Hallucination Detection Framework for LLMs.

Score: 29 / 100 (Experimental)

When working with AI models that generate both text and images, EasyDetect helps you identify whether the AI is making up information or presenting facts incorrectly. It takes the model's visual input/output together with its generated text, then reports whether each claim is accurate or hallucinated. This tool is aimed at AI researchers and developers who are building or evaluating multimodal AI systems.

No commits in the last 6 months.

Use this if you need to systematically check whether your Multimodal Large Language Models (MLLMs) are generating false or inconsistent information in their image descriptions or image generations.

Not ideal if you are looking for a general-purpose AI fact-checker for text-only models or if you are not directly involved in developing or evaluating MLLMs.

Tags: AI evaluation, multimodal AI, AI safety, AI reliability, large language models
Badges: Stale (6m), No Package, No Dependents
Maintenance: 0 / 25
Adoption: 7 / 25
Maturity: 16 / 25
Community: 6 / 25


Stars: 40
Forks: 2
Language: Python
License: Apache-2.0
Last pushed: Feb 25, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/generative-ai/zjunlp/EasyDetect"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
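The same endpoint can be called from code. Below is a minimal Python sketch using only the standard library; it assumes the endpoint returns JSON, and since the response schema is not documented here, the example just fetches and prints the raw payload rather than picking out specific fields.

```python
import json
import urllib.request
from urllib.parse import quote

BASE = "https://pt-edge.onrender.com/api/v1/quality/generative-ai"

def quality_url(owner: str, repo: str) -> str:
    """Build the quality-score endpoint URL for a GitHub owner/repo pair."""
    return f"{BASE}/{quote(owner)}/{quote(repo)}"

def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch the quality data for a repo (requires network access)."""
    with urllib.request.urlopen(quality_url(owner, repo), timeout=10) as resp:
        return json.load(resp)

if __name__ == "__main__":
    # Equivalent to the curl command above:
    print(json.dumps(fetch_quality("zjunlp", "EasyDetect"), indent=2))
```

Without an API key this counts against the 100 requests/day anonymous quota, so cache the response if you poll many repositories.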