voidism/Lookback-Lens

Code for the EMNLP 2024 paper "Detecting and Mitigating Contextual Hallucinations in Large Language Models Using Only Attention Maps"

Score: 32 / 100 (Emerging)

When large language models (LLMs) generate summaries or answers, they sometimes "hallucinate" details that are not present in the source text. This project detects such inaccuracies by analyzing how much of the LLM's attention goes to the source material versus its own previously generated words. It helps AI product managers, content creators working with AI, and researchers identify and reduce fabricated information in LLM outputs, improving reliability. The input is an LLM's generated text together with the source context; the output is a hallucination flag.
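As a rough illustration of the approach (a minimal sketch based on the paper's description, not the repository's actual code; the function name and array layout below are assumptions), the core signal is a per-attention-head "lookback ratio": the fraction of attention that each newly generated token places on the source context rather than on previously generated tokens. These per-head ratios can then feed a simple classifier.

import numpy as np

def lookback_ratios(attn, context_len):
    # attn: attention weights of shape (layers, heads, gen_len, seq_len),
    # one row per generated token over the full sequence (context + generated).
    # context_len: number of tokens in the source context.
    # Returns per-head lookback ratios averaged over generated positions,
    # shape (layers, heads). Hypothetical helper, not the repo's actual API.
    ctx_mass = attn[..., :context_len].sum(-1)       # attention mass on the source context
    new_mass = attn[..., context_len:].sum(-1)       # attention mass on generated tokens
    ratio = ctx_mass / (ctx_mass + new_mass + 1e-8)  # lookback ratio per generated token
    return ratio.mean(-1)                            # average over generated tokens

# A simple classifier (e.g. logistic regression) over the flattened per-head
# ratios could then flag spans that look like hallucinated training examples:
# from sklearn.linear_model import LogisticRegression
# clf = LogisticRegression().fit(X_train, y_train)   # X: (n_spans, layers * heads)
# is_hallucinated = clf.predict(lookback_ratios(attn, context_len).reshape(1, -1))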

147 stars. No commits in the last 6 months.

Use this if you need to build more trustworthy AI applications by automatically flagging or reducing instances where your LLM generates factually incorrect or unsupported information compared to the provided context.

Not ideal if your primary concern is grammatical errors, stylistic issues, or factual errors that do not stem from misrepresenting a provided source context.

AI content quality assurance · LLM fact-checking · NLP reliability · Generative AI ethics · Information retrieval
No License · Stale (6m) · No Package · No Dependents
Maintenance 2 / 25
Adoption 10 / 25
Maturity 8 / 25
Community 12 / 25

How are scores calculated?
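For this listing, the four sub-scores appear to sum to the overall score: 2 + 10 + 8 + 12 = 32 out of 100.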

Stars: 147
Forks: 12
Language: Python
License: None
Last pushed: Oct 13, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/voidism/Lookback-Lens"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
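
The same request from Python, as a minimal sketch assuming the endpoint returns JSON (response field names are not shown here):

import requests

url = "https://pt-edge.onrender.com/api/v1/quality/transformers/voidism/Lookback-Lens"
resp = requests.get(url, timeout=10)   # public endpoint, no API key required for 100 requests/day
resp.raise_for_status()
print(resp.json())                     # quality data for voidism/Lookback-Lens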