fannie1208/FactTest

[ICML2025] "FactTest: Factuality Testing in Large Language Models with Finite-Sample and Distribution-Free Guarantees"

23 / 100 · Experimental

This tool helps researchers and AI practitioners systematically assess how truthful a Large Language Model (LLM) is when it generates text. You provide the LLM you want to test plus a calibration dataset, and it produces a statistical measure of factual accuracy backed by finite-sample, distribution-free guarantees. It's designed for those who need to rigorously quantify and report the factuality of LLMs.
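To make the guarantee idea concrete, here is a minimal, hedged sketch in the spirit of split conformal calibration: it is NOT the paper's exact procedure, and the `confidence` scores, threshold rule, and alpha level are illustrative assumptions. The idea is that a calibration set of questions the model got wrong can be used to pick a confidence cutoff so the model abstains often enough to bound its error rate, with a finite-sample, distribution-free argument.

```python
import math

def calibrate_threshold(cal_scores, alpha=0.05):
    """Split-conformal style cutoff (illustrative, not FactTest's exact method).

    cal_scores: model confidence scores on calibration questions it answered
    incorrectly. Answering only when confidence exceeds the returned cutoff
    keeps the chance of emitting a wrong answer below alpha, under
    exchangeability, with a finite-sample correction of (n + 1)."""
    n = len(cal_scores)
    k = math.ceil((n + 1) * (1 - alpha))  # conservative (1 - alpha) quantile index
    ordered = sorted(cal_scores)
    # If the correction pushes past the sample, always abstain.
    return float("inf") if k > n else ordered[k - 1]

def answer_or_abstain(score, threshold):
    """Decision rule: answer only when confidence clears the calibrated cutoff."""
    return "answer" if score > threshold else "abstain"

# Illustrative calibration scores (confidences the model had on wrong answers):
cal = [0.10, 0.20, 0.30, 0.40, 0.55, 0.60, 0.70, 0.75, 0.80]
t = calibrate_threshold(cal, alpha=0.2)  # -> 0.75 for this toy data
```

With this cutoff, `answer_or_abstain(0.9, t)` answers and `answer_or_abstain(0.5, t)` abstains; the abstention option is what buys the statistical guarantee.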

No commits in the last 6 months.

Use this if you need to scientifically test and report the factuality of an LLM with reliable, statistically-sound metrics.

Not ideal if you're looking for a simple, quick way to get a subjective sense of an LLM's general truthfulness without deep statistical analysis.

LLM evaluation · AI fact-checking · natural language processing · research · model reliability · statistical testing
No License · Stale (6m) · No Package · No Dependents
Maintenance: 2 / 25
Adoption: 5 / 25
Maturity: 8 / 25
Community: 8 / 25


Stars: 9
Forks: 1
Language: Python
License: none
Last pushed: May 29, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/fannie1208/FactTest"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.