141forever/UncerSema4HalluDetec
This is the repository for the paper "Enhancing Uncertainty Modeling with Semantic Graph for Hallucination Detection" (AAAI 2025).
This project helps you identify when Large Language Models (LLMs) generate non-factual or unfaithful information, often called "hallucinations." It takes text generated by an LLM as input and determines the likelihood of hallucination at the token, sentence, and passage levels. This is useful for anyone relying on LLM outputs for critical tasks, such as content creators, researchers, or data analysts, who need to ensure the accuracy of generated text.
No commits in the last 6 months.
Use this if you need to systematically assess and reduce the risk of misinformation from LLM-generated content in your applications.
Not ideal if you are looking for a tool to generate text or improve the factual accuracy of an LLM through direct training.
Stars: 8
Forks: 1
Language: Python
License: —
Category: —
Last pushed: Apr 05, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/141forever/UncerSema4HalluDetec"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
THU-BPM/MarkLLM
MarkLLM: An Open-Source Toolkit for LLM Watermarking (EMNLP 2024 System Demonstration).
git-disl/Vaccine
This is the official code for the paper "Vaccine: Perturbation-aware Alignment for Large...
zjunlp/Deco
[ICLR 2025] MLLM can see? Dynamic Correction Decoding for Hallucination Mitigation
HillZhang1999/ICD
Code & Data for our Paper "Alleviating Hallucinations of Large Language Models through Induced...
voidism/DoLa
Official implementation for the paper "DoLa: Decoding by Contrasting Layers Improves Factuality...