aimonlabs/hallucination-detection-model
HalluciNot: Hallucination Detection Through Context and Common Knowledge Verification
This tool helps anyone who relies on AI-generated text verify its factual accuracy. You provide an AI-generated response alongside the original prompt and any source context, and it tells you whether the AI made up facts or got details wrong. It is aimed at professionals who must vouch for the correctness of AI output, such as journalists, technical writers, and legal assistants.
No commits in the last 6 months.
Use this if you need to automatically identify and score fabricated or incorrect information in text generated by large language models.
Not ideal if you're looking for a tool to improve the grammar or style of AI-generated content, as its focus is solely on factual correctness.
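A minimal sketch of what calling the detector could look like, assuming the checkpoint on the Hugging Face Hub (aimonlabs/hallucination-detection-model) loads through a standard text-classification pipeline; the pipeline task, the context/response input format, and the output labels are all assumptions here, since the repo may ship its own loader.

# Hypothetical usage sketch; model ID, task, input format, and labels are assumptions.
from transformers import pipeline

detector = pipeline(
    "text-classification",
    model="aimonlabs/hallucination-detection-model",  # assumed HF model ID
)

context = "The Eiffel Tower was completed in 1889 and stands about 330 m tall."
response = "The Eiffel Tower opened in 1901 and is 500 m tall."

# Assumed convention: context and response joined with a separator token.
print(detector(f"{context} [SEP] {response}"))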
Stars: 11
Forks: —
Language: Python
License: —
Category: ML Frameworks
Last pushed: May 02, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/aimonlabs/hallucination-detection-model"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
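If you prefer scripting over curl, a short Python sketch like this fetches the same record; the endpoint is copied from the curl line above, but the response schema is not documented on this page, so the snippet simply prints the returned JSON.

import requests

# Endpoint taken verbatim from the curl example above.
url = ("https://pt-edge.onrender.com/api/v1/quality/"
       "ml-frameworks/aimonlabs/hallucination-detection-model")
resp = requests.get(url, timeout=10)  # anonymous access: 100 requests/day
resp.raise_for_status()
print(resp.json())  # field names are not documented on this page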
Higher-rated alternatives
limix-ldm-ai/LimiX
LimiX: Unleashing Structured-Data Modeling Capability for Generalist Intelligence...
tatsu-lab/stanford_alpaca
Code and documentation to train Stanford's Alpaca models, and generate the data.
google-research/plur
PLUR (Programming-Language Understanding and Repair) is a collection of source code datasets...
YalaLab/pillar-finetune
Finetuning framework for Pillar medical imaging models.
thuml/LogME
Code release for "LogME: Practical Assessment of Pre-trained Models for Transfer Learning" (ICML...