openfactcheck-research/openfactcheck
An Open-source Factuality Evaluation Demo for LLMs
This tool helps researchers and content creators assess the factual accuracy of text generated by large language models (LLMs). Given an LLM response, it pinpoints and evaluates claims that may be factually incorrect or unsupported. It's designed for anyone who needs to verify that AI output is trustworthy and accurate.
Available on PyPI.
Use this if you need to systematically check the factual correctness of text produced by AI, such as summaries, articles, or answers.
Not ideal if you're looking for a tool that generates original, fact-checked content directly; it focuses on evaluating existing AI output.
Stars
24
Forks
5
Language
Python
License
GPL-3.0
Category
Last pushed
Feb 23, 2026
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/nlp/openfactcheck-research/openfactcheck"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000/day.
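If you'd rather call the endpoint from code than from curl, the URL can be assembled from the owner and repository name. A minimal Python sketch, assuming the `/api/v1/quality/{category}/{owner}/{repo}` path layout from the curl example above generalizes (the `nlp` segment appears to be a category slug; that is an assumption):

```python
def quality_api_url(owner: str, repo: str, category: str = "nlp") -> str:
    """Build the quality-API URL for a repository.

    Assumes the path pattern from the curl example above; whether the
    category segment varies per repository is not confirmed here.
    """
    return f"https://pt-edge.onrender.com/api/v1/quality/{category}/{owner}/{repo}"

# The resulting URL can be passed to curl, requests.get, urllib, etc.
print(quality_api_url("openfactcheck-research", "openfactcheck"))
```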
Related tools
several27/FakeNewsCorpus
A dataset of millions of news articles scraped from a curated list of data sources.
lilakk/BooookScore
A package to generate summaries of long-form text and evaluate the coherence of these summaries....
Cartus/Automated-Fact-Checking-Resources
Links to conference/journal publications in automated fact-checking (resources for the...
armingh2000/FactScoreLite
FactScoreLite is an implementation of the FactScore metric, designed for detailed accuracy...
manideep2510/siamese-BERT-fake-news-detection-LIAR
Triple Branch BERT Siamese Network for fake news classification on LIAR-PLUS dataset in PyTorch