openfactcheck-research/openfactcheck

An Open-source Factuality Evaluation Demo for LLMs

56
/ 100
Established

This tool helps researchers and content creators assess the factual accuracy of text generated by large language models (LLMs). Given an LLM response, it pinpoints and evaluates claims that may be factually incorrect or unsupported. It's designed for anyone who needs to verify that AI model output is trustworthy and accurate.

Available on PyPI.

Use this if you need to systematically check the factual correctness of text produced by AI, such as summaries, articles, or answers.

Not ideal if you're looking for a tool to generate original, fact-checked content directly, as it focuses on evaluating existing AI output.

content-verification AI-ethics research-integrity information-quality journalism-support
No Dependents
Maintenance 10 / 25
Adoption 6 / 25
Maturity 25 / 25
Community 15 / 25

How are scores calculated?

Stars

24

Forks

5

Language

Python

License

GPL-3.0

Last pushed

Feb 23, 2026

Commits (30d)

0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/nlp/openfactcheck-research/openfactcheck"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
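As a minimal sketch of using the endpoint above: the snippet assembles the same URL as the curl example for an arbitrary ecosystem/owner/repo, and checks that the four /25 sub-scores shown on this page (Maintenance 10, Adoption 6, Maturity 25, Community 15) add up to the displayed 56/100 overall. The `quality_url` helper and the sub-score field names are illustrative assumptions, not a documented schema; no network request is made here.

```python
# Hypothetical helper: build the quality-API URL following the curl example.
# Path layout (ecosystem/owner/repo) is inferred from that example.
BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(ecosystem: str, owner: str, repo: str) -> str:
    """Assemble the API URL used in the curl example above."""
    return f"{BASE}/{ecosystem}/{owner}/{repo}"

# Sub-scores and overall score as displayed on this page (names assumed).
sub_scores = {"maintenance": 10, "adoption": 6, "maturity": 25, "community": 15}
overall = 56

url = quality_url("nlp", "openfactcheck-research", "openfactcheck")
print(url)
# The four /25 sub-scores sum to the /100 overall score.
print(sum(sub_scores.values()) == overall)
```

To actually fetch the data, pass the returned URL to `curl` or any HTTP client; within the free tier no key is required.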