mala-lab/HaMI

[NeurIPS 2025] Official implementation for "Robust Hallucination Detection in LLMs via Adaptive Token Selection" https://arxiv.org/abs/2504.07863

Overall score: 25 / 100 (Experimental)

This project helps ensure the accuracy of responses from large language models (LLMs) by detecting when they generate incorrect or made-up information, known as "hallucinations". It takes the LLM's generated text and identifies the specific parts that are untruthful, providing a reliable check on the output. Anyone building or using LLM-powered applications, especially in sensitive domains, can use it to keep generated output trustworthy.

Use this if you need a robust way to automatically identify and flag 'hallucinations' in the text generated by your large language models, especially across diverse types of questions and answers.

Not ideal if you are looking for a tool that generates content or focuses on general LLM performance metrics rather than specifically targeting truthfulness and factual accuracy.

LLM-safety AI-reliability content-verification NLP-quality-assurance
No License · No Package · No Dependents
Maintenance 6 / 25
Adoption 5 / 25
Maturity 7 / 25
Community 7 / 25


Stars: 11
Forks: 1
Language: Python
License: None
Last pushed: Oct 30, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/mala-lab/HaMI"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
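For quick scripting, the same endpoint can be queried from Python instead of curl. Below is a minimal sketch, assuming the endpoint returns a JSON body; the exact response schema is not documented here, so the script simply prints whatever comes back.

import json
import urllib.request

# Quality endpoint for mala-lab/HaMI (same URL as the curl example above).
URL = "https://pt-edge.onrender.com/api/v1/quality/transformers/mala-lab/HaMI"

with urllib.request.urlopen(URL) as resp:
    # Assumption: the API responds with JSON; adjust parsing if it does not.
    data = json.loads(resp.read().decode("utf-8"))

# Print the full payload; pick out specific fields once the real schema is known.
print(json.dumps(data, indent=2))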