intuit/sac3
Official repo for SAC3: Reliable Hallucination Detection in Black-Box Language Models via Semantic-aware Cross-check Consistency
This tool helps assess the accuracy of responses from large language models (LLMs) by detecting when they generate incorrect or fabricated information, known as 'hallucinations.' It takes a question and a target answer as input, then checks the LLM's consistency across multiple semantically equivalent variations of the question and against additional verification models. The output indicates how reliably the LLM provides factual answers, making it useful for anyone deploying LLMs in production applications where accuracy is critical.
No commits in the last 6 months.
Use this if you need to confidently assess whether a large language model's outputs are factually correct and consistent, especially when the LLM is a 'black-box' system you can't directly inspect.
Not ideal if you are looking for a tool to improve the stylistic quality or fluency of LLM outputs, as its focus is solely on factual accuracy and consistency.
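The cross-check idea described above can be sketched in a few lines: paraphrase the question, re-ask the model, and score how often the answers agree with the target. This is a minimal, model-agnostic sketch of the concept only; the function names, the stub model, and the exact-match comparison are illustrative assumptions, not the repo's actual API (SAC3 uses an LLM-based semantic equivalence check rather than string matching).

```python
from typing import Callable, List


def cross_check_consistency(
    target_answer: str,
    paraphrased_questions: List[str],
    ask: Callable[[str], str],          # queries the (black-box) model
    same_meaning: Callable[[str, str], bool],  # semantic equivalence check
) -> float:
    """Fraction of paraphrased questions whose answers agree with the target.

    A low score suggests the model's answer is not stable under semantically
    equivalent rephrasings, a signal of possible hallucination.
    """
    if not paraphrased_questions:
        return 0.0
    answers = [ask(q) for q in paraphrased_questions]
    agreements = [same_meaning(a, target_answer) for a in answers]
    return sum(agreements) / len(agreements)


# Toy stand-ins for demonstration (a real setup would call an LLM API).
def toy_model(question: str) -> str:
    return "Paris" if "france" in question.lower() else "unknown"


def exact_match(a: str, b: str) -> bool:
    return a.strip().lower() == b.strip().lower()


score = cross_check_consistency(
    target_answer="Paris",
    paraphrased_questions=[
        "What city is the capital of France?",
        "France's capital city is which city?",
    ],
    ask=toy_model,
    same_meaning=exact_match,
)
print(score)  # 1.0: the toy model answers consistently
```

In the real SAC3 pipeline, `same_meaning` would itself be an LLM judgment of semantic equivalence, and additional verifier models would answer the same paraphrases to cross-check the target model.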
Stars
39
Forks
7
Language
Jupyter Notebook
License
Apache-2.0
Category
Last pushed
Jan 18, 2025
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/intuit/sac3"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
vectara/hallucination-leaderboard
Leaderboard Comparing LLM Performance at Producing Hallucinations when Summarizing Short Documents
PKU-YuanGroup/Hallucination-Attack
An attack method for inducing hallucinations in LLMs
amir-hameed-mir/Sirraya_LSD_Code
Layer-wise Semantic Dynamics (LSD) is a model-agnostic framework for hallucination detection in...
NishilBalar/Awesome-LVLM-Hallucination
up-to-date curated list of state-of-the-art Large vision language models hallucinations...
HillZhang1999/llm-hallucination-survey
Reading list of hallucination in LLMs. Check out our new survey paper: "Siren’s Song in the AI...