OpenMOSS/HalluQA
Dataset and evaluation script for "Evaluating Hallucinations in Chinese Large Language Models"
This project helps evaluate how often Chinese Large Language Models (LLMs) generate incorrect or made-up information, a problem known as hallucination. It provides a benchmark dataset of carefully designed Chinese questions, along with scripts to assess your model's answers. The output is a "non-hallucination rate" or accuracy score, indicating your model's reliability. This is for researchers, product managers, or anyone working with Chinese LLMs who needs to quantify and improve their models' factual accuracy.
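The "non-hallucination rate" described above is, at its core, the fraction of answers judged non-hallucinated. A minimal sketch of that ratio (a hypothetical helper for illustration, not the repository's actual evaluation script):

```python
def non_hallucination_rate(judgements):
    """judgements: list of bools, True = answer judged non-hallucinated.

    Returns the fraction of answers that were NOT hallucinated,
    i.e. the headline accuracy-style score this benchmark reports.
    """
    if not judgements:
        raise ValueError("no judgements provided")
    return sum(judgements) / len(judgements)

# Example: 3 of 4 answers judged factual -> 0.75
print(non_hallucination_rate([True, True, False, True]))
```

In the real workflow, each judgement would come from the repo's evaluation script comparing a model answer against the benchmark's reference answers.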
136 stars. No commits in the last 6 months.
Use this if you need to measure and compare the hallucination rates of various Chinese Large Language Models, especially for tasks involving knowledge or sensitive information.
Not ideal if you are working with non-Chinese LLMs or if your primary concern is not model hallucination.
Stars: 136
Forks: 6
Language: Python
License: Apache-2.0
Category:
Last pushed: Jun 05, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/OpenMOSS/HalluQA"
Open to everyone: 100 requests/day, no key needed. Get a free key for 1,000 requests/day.
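The same endpoint can be called from Python using only the standard library. The URL pattern follows the curl example above; the response schema is not documented here, so this sketch just prints whatever JSON comes back, and the request will fail gracefully without network access:

```python
import json
import urllib.request

def repo_endpoint(owner: str, repo: str) -> str:
    """Build the per-repo URL in the pattern shown in the curl example."""
    return f"https://pt-edge.onrender.com/api/v1/quality/llm-tools/{owner}/{repo}"

url = repo_endpoint("OpenMOSS", "HalluQA")
try:
    with urllib.request.urlopen(url, timeout=10) as resp:
        data = json.load(resp)
    print(json.dumps(data, indent=2))
except OSError as exc:  # URLError subclasses OSError; covers offline use
    print(f"request failed: {exc}")
```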
Higher-rated alternatives
vectara/hallucination-leaderboard
Leaderboard Comparing LLM Performance at Producing Hallucinations when Summarizing Short Documents
PKU-YuanGroup/Hallucination-Attack
An attack designed to induce hallucinations in LLMs
amir-hameed-mir/Sirraya_LSD_Code
Layer-wise Semantic Dynamics (LSD) is a model-agnostic framework for hallucination detection in...
NishilBalar/Awesome-LVLM-Hallucination
up-to-date curated list of state-of-the-art Large vision language models hallucinations...
intuit/sac3
Official repo for SAC3: Reliable Hallucination Detection in Black-Box Language Models via...