armingh2000/FactScoreLite

FactScoreLite is an implementation of the FactScore metric, designed for detailed accuracy assessment in text generation. This package builds upon the framework provided by the original FactScore repository, which is no longer maintained and contains outdated functions.

Quality score: 41 / 100 (Emerging)

This tool helps you evaluate the factual accuracy of text generated by AI models. You provide the AI-generated text and its original knowledge source, and it breaks down the text into individual facts, then checks each fact against the source. It's designed for anyone who needs to ensure their AI-generated content is truthful and consistent with reference materials.
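The decompose-then-verify loop described above can be sketched in a few lines. Note this is a self-contained illustration of the FactScore idea, not FactScoreLite's actual API: the real package delegates both fact extraction and fact checking to an LLM, whereas the stand-ins below (`extract_facts`, `is_supported`) are naive string heuristics chosen so the sketch stays runnable.

```python
# Sketch of the FactScore idea: split a generation into atomic facts,
# verify each against a knowledge source, report the fraction supported.
# Both helpers are simplified stand-ins for FactScoreLite's LLM calls.

def extract_facts(generation: str) -> list[str]:
    """Split a generation into candidate atomic facts (one per sentence)."""
    return [s.strip() for s in generation.split(".") if s.strip()]

def is_supported(fact: str, knowledge_source: str) -> bool:
    """Naive checker: a fact counts as supported if every one of its
    words appears somewhere in the knowledge source."""
    words = {w.lower() for w in fact.split()}
    source = knowledge_source.lower()
    return all(w in source for w in words)

def fact_score(generation: str, knowledge_source: str) -> float:
    """FactScore = supported facts / total facts."""
    facts = extract_facts(generation)
    if not facts:
        return 0.0
    supported = sum(is_supported(f, knowledge_source) for f in facts)
    return supported / len(facts)

generation = ("Paris is the capital of France. "
              "Paris has a population of two billion.")
source = ("Paris is the capital and largest city of France, "
          "with a population of about two million.")
print(fact_score(generation, source))  # 1 of 2 facts supported -> 0.5
```

The per-fact granularity is the point of the metric: a single wrong claim lowers the score proportionally instead of failing the whole generation.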

No commits in the last 6 months. Available on PyPI.

Use this if you need a reliable and detailed way to assess the factual correctness of AI-generated text, particularly when you have a specific knowledge source to compare against.

Not ideal if you don't have a clear knowledge source for the AI-generated text or if you are looking for subjective quality assessments beyond factual accuracy.

Tags: AI content validation, text generation quality, fact-checking, AI, large language model evaluation, content accuracy
Status: Stale (6 months)

Score breakdown:
- Maintenance: 0 / 25
- Adoption: 5 / 25
- Maturity: 25 / 25
- Community: 11 / 25


Stars: 13
Forks: 2
Language: Python
License: MIT
Last pushed: Apr 25, 2024
Commits (30d): 0
Dependencies: 5

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/nlp/armingh2000/FactScoreLite"

Open to everyone: 100 requests/day with no API key required. A free key raises the limit to 1,000 requests/day.
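The same request can be made from Python. The endpoint URL is taken from the curl example above; the response schema is not documented here, so the commented field access is a guess and is labeled as such.

```python
# Build (and optionally fetch) the quality-data endpoint shown in the
# curl example. Only the URL pattern is taken from the page; the JSON
# field names in the comments are assumptions.
import json
from urllib.request import urlopen

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the endpoint URL for a repository's quality record."""
    return f"{BASE}/{category}/{owner}/{repo}"

url = quality_url("nlp", "armingh2000", "FactScoreLite")
print(url)
# data = json.load(urlopen(url))   # requires network access
# print(data.get("score"))         # hypothetical field name
```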