IAAR-Shanghai/UHGEval-dataset

The full pipeline of creating UHGEval hallucination dataset

Quality score: 13 / 100 (Experimental)
This project provides a complete pipeline for building a dataset designed to evaluate factual "hallucinations" in AI-generated news continuations. It ingests raw news articles, preprocesses them, generates candidate AI-written continuations, and labels which continuations contain factual errors, producing a curated dataset for evaluating AI models. It is intended for researchers and developers working on large language models (LLMs) and natural language generation (NLG) in news contexts.
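The pipeline described above can be summarized as three stages: generate a continuation, label it for factual errors, and assemble a dataset record. A minimal sketch follows; the function names, the placeholder generation, and the labeling heuristic are all illustrative assumptions, not the repository's actual API.

```python
# Hypothetical sketch of a hallucination-dataset pipeline; all names and
# logic here are illustrative stand-ins, not UHGEval-dataset's real code.

def generate_continuation(article_text: str) -> str:
    """Stand-in for an LLM call that continues a news article."""
    return article_text.split(".")[0] + " (continued)"

def label_hallucination(continuation: str) -> bool:
    """Stand-in for the factual-error labeling step (placeholder heuristic)."""
    return "(continued)" in continuation

def build_record(article_text: str) -> dict:
    """Assemble one dataset record: source, continuation, and label."""
    continuation = generate_continuation(article_text)
    return {
        "source": article_text,
        "continuation": continuation,
        "hallucinated": label_hallucination(continuation),
    }

record = build_record("A storm hit the coast on Monday. Officials responded.")
print(record["hallucinated"])  # → True
```

In the real pipeline each stand-in would be replaced by an LLM generation call and a human or model-assisted factuality check, respectively.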

No commits in the last 6 months.

Use this if you need a structured, pre-processed dataset to rigorously test and improve the factual accuracy of AI models that summarize or extend news content.

Not ideal if you are looking for a tool to generate news articles or summaries directly, or if your primary interest is in evaluating general text generation quality rather than factual accuracy.

AI-evaluation news-analysis natural-language-generation large-language-models dataset-creation
No License · Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 5 / 25
Maturity 8 / 25
Community 0 / 25


Stars: 9
Forks:
Language: Python
License: None
Last pushed: Feb 15, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/IAAR-Shanghai/UHGEval-dataset"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
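The same endpoint can be queried from Python with only the standard library. A sketch follows; the URL is the one shown above, but the response schema is not documented here, so the fetch helper simply returns whatever JSON the API sends, and the example only builds the URL rather than making a live request.

```python
# Sketch of querying the public quality API with Python's standard library.
# The endpoint path comes from the curl example above; the response fields
# are not documented here, so fetch_quality returns the raw parsed JSON.
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality/llm-tools"

def quality_url(owner: str, repo: str) -> str:
    """Build the per-repository quality endpoint URL."""
    return f"{BASE}/{owner}/{repo}"

def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch and parse the JSON quality report for a repository."""
    with urllib.request.urlopen(quality_url(owner, repo), timeout=10) as resp:
        return json.load(resp)

# Build (but do not fetch) the endpoint for this repository:
print(quality_url("IAAR-Shanghai", "UHGEval-dataset"))
```

At the free tier (100 requests/day without a key), a small script like this is enough to poll scores for a handful of repositories.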