Mattbusel/LLM-Hallucination-Detection-Script
A comprehensive toolkit for detecting potential hallucinations in LLM responses. Compatible with any LLM API (OpenAI, Anthropic, local models, etc.).
This toolkit helps developers and engineers ensure the reliability of AI applications by identifying when Large Language Models (LLMs) may be producing incorrect or fabricated information. You provide an LLM's text output, optionally with the original context, and it returns a probability score indicating potential hallucination, along with the specific issues found and actionable recommendations. It is aimed at AI/ML engineers and product developers integrating LLMs into production systems.
Use this if you are building or deploying LLM-powered applications and need a robust way to automatically flag or prevent the model from generating factually incorrect or nonsensical content.
Not ideal if you are an end-user simply interacting with an LLM and want to check individual responses, as this is a toolkit for developers to integrate into their systems.
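The listing describes the toolkit's contract (an LLM's text output plus optional context in; a probability score, issues, and recommendations out) without showing its actual API. The sketch below is a hypothetical illustration of that contract only: the `HallucinationReport` shape, the `check_response` name, and the toy keyword/overlap heuristic are all assumptions for illustration, not the repo's real detection logic.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class HallucinationReport:
    # Hypothetical output shape matching the description above:
    # a probability-like score plus issues and recommendations.
    score: float                                  # 0.0 (likely grounded) .. 1.0 (likely hallucinated)
    issues: list = field(default_factory=list)
    recommendations: list = field(default_factory=list)

def check_response(response: str, context: Optional[str] = None) -> HallucinationReport:
    """Toy heuristic: flag overconfident wording and low lexical overlap with context."""
    issues, score = [], 0.0

    # Overconfident phrasing is a common (weak) hallucination signal.
    overconfident = ["definitely", "certainly", "100%", "guaranteed"]
    hits = [w for w in overconfident if w in response.lower()]
    if hits:
        issues.append(f"overconfident wording: {hits}")
        score += 0.2 * len(hits)

    # If context is supplied, penalize responses that share few words with it.
    if context:
        resp_words = set(response.lower().split())
        ctx_words = set(context.lower().split())
        overlap = len(resp_words & ctx_words) / max(len(resp_words), 1)
        if overlap < 0.3:
            issues.append(f"low overlap with provided context ({overlap:.0%})")
            score += 0.4

    recs = ["verify the flagged claims against a trusted source"] if issues else []
    return HallucinationReport(score=min(score, 1.0), issues=issues, recommendations=recs)
```

A real detector would use stronger signals (entailment models, self-consistency sampling, retrieval checks), but the input/output shape above is the kind of interface the description implies.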
Stars
15
Forks
1
Language
Makefile
License
—
Category
—
Last pushed
Mar 09, 2026
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/Mattbusel/LLM-Hallucination-Detection-Script"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
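The same endpoint can be queried from Python with the standard library; a minimal sketch using the URL shown in the curl command above (the response is assumed to be JSON, and no specific fields are assumed):

```python
import json
import urllib.request

# Endpoint copied from the curl example above.
API_URL = (
    "https://pt-edge.onrender.com/api/v1/quality/llm-tools/"
    "Mattbusel/LLM-Hallucination-Detection-Script"
)

def fetch_quality_data(url: str = API_URL) -> dict:
    """Fetch and decode the JSON quality payload for this repository."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)

if __name__ == "__main__":
    data = fetch_quality_data()
    print(json.dumps(data, indent=2))
```

With a free API key, you would add it per the service's documentation (the key-passing mechanism is not shown on this page).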
Higher-rated alternatives
vectara/hallucination-leaderboard
Leaderboard Comparing LLM Performance at Producing Hallucinations when Summarizing Short Documents
PKU-YuanGroup/Hallucination-Attack
An attack for inducing hallucinations in LLMs
amir-hameed-mir/Sirraya_LSD_Code
Layer-wise Semantic Dynamics (LSD) is a model-agnostic framework for hallucination detection in...
NishilBalar/Awesome-LVLM-Hallucination
An up-to-date curated list of state-of-the-art work on large vision-language model hallucinations...
intuit/sac3
Official repo for SAC3: Reliable Hallucination Detection in Black-Box Language Models via...