voidism/Lookback-Lens
Code for the EMNLP 2024 paper "Detecting and Mitigating Contextual Hallucinations in Large Language Models Using Only Attention Maps"
When large language models (LLMs) generate summaries or answers, they sometimes 'hallucinate' or invent details not present in the original text. This project provides a way to detect these inaccuracies by analyzing how the LLM pays attention to the source material versus its own generated words. It helps AI product managers, content creators using AI, or researchers identify and reduce made-up information in LLM outputs, improving reliability. Input is an LLM's generated text and the source context; output is a flag for hallucination.
147 stars. No commits in the last 6 months.
Use this if you need to build more trustworthy AI applications by automatically flagging or reducing instances where your LLM generates factually incorrect or unsupported information compared to the provided context.
Not ideal if your primary concern is grammatical errors, stylistic issues, or factual errors that don't stem from misrepresenting a given source context.
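The detection signal described above is the paper's "lookback ratio": for each generated token and attention head, the share of attention mass placed on the source context versus the tokens generated so far. A minimal sketch of that computation is below; the function name, array shapes, and toy numbers are illustrative assumptions, not the repo's actual API, and a real pipeline would pull these weights from a model run with attention outputs enabled and feed the ratios to a trained classifier.

```python
import numpy as np

def lookback_ratio(attn, context_len):
    """Per-head lookback ratio for one generated token.

    attn: array of shape (num_heads, seq_len) holding one token's attention
          weights over all previous positions (each row sums to 1).
    context_len: number of tokens belonging to the source context.
    Returns attention mass on the context divided by total mass on
    context plus already-generated tokens.
    """
    ctx = attn[:, :context_len].sum(axis=1)   # mass on source context
    new = attn[:, context_len:].sum(axis=1)   # mass on generated tokens
    return ctx / (ctx + new)

# Toy example: 2 heads, 3 context tokens, 2 previously generated tokens.
attn = np.array([
    [0.4, 0.3, 0.2, 0.05, 0.05],  # head grounded mostly in the context
    [0.1, 0.1, 0.1, 0.40, 0.30],  # head attending mostly to its own output
])
ratios = lookback_ratio(attn, context_len=3)
# ratios -> [0.9, 0.3]; a low ratio suggests the token is weakly grounded
```

Intuitively, spans whose heads show consistently low lookback ratios are candidates for contextual hallucination, which is what the repo's classifier is trained to flag.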
Stars: 147
Forks: 12
Language: Python
License: —
Category:
Last pushed: Oct 13, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/voidism/Lookback-Lens"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
THU-BPM/MarkLLM
MarkLLM: An Open-Source Toolkit for LLM Watermarking (EMNLP 2024 System Demonstration)
git-disl/Vaccine
This is the official code for the paper "Vaccine: Perturbation-aware Alignment for Large...
zjunlp/Deco
[ICLR 2025] MLLM can see? Dynamic Correction Decoding for Hallucination Mitigation
HillZhang1999/ICD
Code & Data for our Paper "Alleviating Hallucinations of Large Language Models through Induced...
voidism/DoLa
Official implementation for the paper "DoLa: Decoding by Contrasting Layers Improves Factuality...