GAIR-NLP/factool

FacTool: Factuality Detection in Generative AI

Score: 52 / 100 (Established)

This tool evaluates the trustworthiness of text generated by large language models (LLMs) such as ChatGPT. You provide the prompt given to an LLM and the response it generated; it returns a detailed report flagging factual errors across tasks such as knowledge-based Q&A, code generation, mathematical reasoning, and scientific literature review. It is useful for anyone who relies on LLMs for accurate information, such as researchers, content creators, or software testers.
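As a sketch of what the tool's input looks like, the snippet below builds a single prompt/response pair tagged with a task category. The input format and the `Factool` class usage shown in the comments are assumptions based on the project's README, not verified here; running the checker also requires API keys.

```python
# Hedged sketch: assumed factool input format (a list of dicts with
# "prompt", "response", and a task "category").
inputs = [
    {
        "prompt": "Who won the 2018 FIFA World Cup?",
        "response": "France won the 2018 FIFA World Cup.",
        # Assumed category names: kbqa, code, math, scientific
        "category": "kbqa",
    }
]

# Running the check requires the `factool` package and API keys,
# so it is left commented out here:
# from factool import Factool
# factool_instance = Factool("gpt-4")
# report = factool_instance.run(inputs)

print(inputs[0]["category"])
```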

916 stars. No commits in the last 6 months. Available on PyPI.

Use this if you need to check the factual accuracy of content, code, or scientific claims produced by AI models before relying on them.

Not ideal if you are looking for a tool to generate content or code yourself, rather than evaluate existing AI outputs.

AI-content-verification Generative-AI-quality fact-checking AI-assisted-research code-integrity
Stale: 6 months
Maintenance 0 / 25
Adoption 10 / 25
Maturity 25 / 25
Community 17 / 25


Stars: 916
Forks: 68
Language: Python
License: Apache-2.0
Last pushed: Aug 19, 2024
Commits (30d): 0
Dependencies: 11

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/GAIR-NLP/factool"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
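The same endpoint can be called from Python. The sketch below only constructs the URL shown in the curl example above; the endpoint path pattern for other repositories and the shape of the JSON response are assumptions, so the actual fetch is left commented out.

```python
# Hedged sketch: build the quality-API URL for an owner/repo pair.
# The path pattern is assumed from the curl example for GAIR-NLP/factool.
from urllib.parse import quote

BASE = "https://pt-edge.onrender.com/api/v1/quality/llm-tools"

def quality_url(owner: str, repo: str) -> str:
    """Return the (assumed) quality endpoint URL for owner/repo."""
    return f"{BASE}/{quote(owner)}/{quote(repo)}"

url = quality_url("GAIR-NLP", "factool")
print(url)

# Fetching the data requires network access; uncomment to run:
# import json, urllib.request
# with urllib.request.urlopen(url) as resp:
#     data = json.load(resp)
```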