OpenKG-ORG/EasyDetect

An Easy-to-use Hallucination Detection Framework for LLMs.

Score: 32 / 100 (Emerging)

EasyDetect helps verify that responses from multimodal large language models (such as GPT-4V or Gemini) are accurate and don't "make things up." Given an image and a text description, or a text prompt and a generated image, it checks the pair for inconsistencies. The tool is aimed at researchers, developers, and quality-assurance teams who need to verify the reliability of AI-generated multimodal content.

No commits in the last 6 months.

Use this if you need to systematically identify and categorize factual errors or inconsistencies (hallucinations) in text generated by AI models based on images, or images generated by AI models based on text.

Not ideal if you are only working with text-based AI models or if you need to detect issues beyond factual hallucinations, such as bias or toxicity.

Tags: AI content verification, multimodal AI evaluation, AI quality assurance, large language model safety, generative AI research
Badges: Stale (6m), No Package, No Dependents
Maintenance: 0 / 25
Adoption: 8 / 25
Maturity: 16 / 25
Community: 8 / 25


Stars: 63
Forks: 4
Language: Python
License: Apache-2.0
Last pushed: Apr 21, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/generative-ai/OpenKG-ORG/EasyDetect"

Open to everyone: 100 requests/day with no key required; a free key raises the limit to 1,000 requests/day.
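For scripted access, the same endpoint can be called from Python's standard library. This is a minimal sketch: the `quality_url` helper and the assumption that the endpoint returns JSON are illustrative, not part of any documented client.

```python
import json
import urllib.request

# Base path taken from the curl example above.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the quality-report URL for a repository (helper is hypothetical)."""
    return f"{API_BASE}/{category}/{owner}/{repo}"


def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """Fetch the quality report; assumes the endpoint responds with JSON."""
    with urllib.request.urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)


# Example: the EasyDetect report shown on this page.
url = quality_url("generative-ai", "OpenKG-ORG", "EasyDetect")
print(url)
```

Calling `fetch_quality("generative-ai", "OpenKG-ORG", "EasyDetect")` would perform the same request as the curl command; the shape of the returned JSON is not documented here, so inspect it before relying on specific fields.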