Mattbusel/LLM-Hallucination-Detection-Script

A comprehensive toolkit for detecting potential hallucinations in LLM responses. Compatible with any LLM API (OpenAI, Anthropic, local models, etc.)

Score: 28 / 100 (Experimental)

This toolkit helps developers and engineers ensure the reliability of AI applications by identifying when Large Language Models (LLMs) might be producing incorrect or fabricated information. You provide an LLM's text output, optionally with the original context, and it returns a probability score indicating potential hallucinations, along with specific issues found and actionable recommendations. It's designed for AI/ML engineers or product developers integrating LLMs into production systems.
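As a rough illustration of that contract (response text in, probability score plus issues and recommendations out), the sketch below shows a toy Python detector. The function name, result fields, and the heuristics used (absolute-claim counting and context overlap) are placeholders for illustration only and are not taken from this repository.

```python
# Toy illustration of the contract described above: an LLM response (and
# optionally the original context) goes in; a probability-style score,
# a list of issues, and recommendations come out. The heuristics below are
# placeholders, NOT the detection logic used by this repository.
from dataclasses import dataclass, field


@dataclass
class DetectionResult:
    hallucination_probability: float          # 0.0 (unlikely) .. 1.0 (likely)
    issues: list = field(default_factory=list)
    recommendations: list = field(default_factory=list)


def detect_hallucinations(response: str, context: str | None = None) -> DetectionResult:
    issues, recommendations = [], []
    score = 0.0

    # Heuristic 1: flag absolute claims, which are riskier to assert unverified.
    absolutes = [w for w in ("always", "never", "definitely", "guaranteed")
                 if w in response.lower()]
    if absolutes:
        score += 0.2
        issues.append(f"Absolute claims found: {absolutes}")
        recommendations.append("Verify absolute statements against a trusted source.")

    # Heuristic 2: if context is supplied, penalize low word overlap with it.
    if context:
        resp_words = set(response.lower().split())
        ctx_words = set(context.lower().split())
        overlap = len(resp_words & ctx_words) / max(len(resp_words), 1)
        if overlap < 0.3:
            score += 0.4
            issues.append(f"Low overlap with provided context ({overlap:.0%}).")
            recommendations.append("Ground the response in the supplied context.")

    return DetectionResult(min(score, 1.0), issues, recommendations)


if __name__ == "__main__":
    result = detect_hallucinations(
        "The answer is definitely 42 and this never changes.",
        context="The document discusses hallucination detection for LLM outputs.",
    )
    print(result.hallucination_probability, result.issues, result.recommendations)
```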

Use this if you are building or deploying LLM-powered applications and need a robust way to automatically flag or prevent the model from generating factually incorrect or nonsensical content.

Not ideal if you are an end-user simply interacting with an LLM and want to check individual responses, as this is a toolkit for developers to integrate into their systems.

Tags: AI-safety LLM-evaluation AI-reliability NLP-engineering production-AI
No license · No package · No dependents

Maintenance: 10 / 25
Adoption: 6 / 25
Maturity: 7 / 25
Community: 5 / 25

Stars: 15
Forks: 1
Language: Makefile
License: none
Last pushed: Mar 09, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/Mattbusel/LLM-Hallucination-Detection-Script"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
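For programmatic use, a minimal Python equivalent of the curl call above might look like the sketch below. Only the URL comes from this page; the code assumes the endpoint returns JSON, which is not documented here.

```python
# Fetch the same endpoint shown above using only the Python standard library.
# The URL is taken from this page; the response format (JSON) is an assumption.
import json
import urllib.request

URL = ("https://pt-edge.onrender.com/api/v1/quality/llm-tools/"
       "Mattbusel/LLM-Hallucination-Detection-Script")

with urllib.request.urlopen(URL, timeout=10) as resp:
    data = json.loads(resp.read().decode("utf-8"))

# Print whatever fields the API actually returns, pretty-printed for inspection.
print(json.dumps(data, indent=2))
```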