Huffon/factsumm

FactSumm: Factual Consistency Scorer for Abstractive Summarization

Score: 45 / 100 (Emerging)

This tool evaluates whether automatically generated summaries accurately reflect their source material. You provide a source article and a generated summary, and it returns a score indicating how factually consistent the summary is, highlighting any discrepancies. It is aimed at developers working on text summarization systems who need to ensure their models produce reliable output.
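As a rough sketch of the workflow, the example below follows the usage pattern documented in the project's README; the FactSumm callable, its arguments, and the sample texts are assumptions based on that documentation and may differ between versions.

# pip install factsumm  (the package is published on PyPI)
from factsumm import FactSumm

# Instantiate the scorer; the underlying NLP models are downloaded on first use.
factsumm = FactSumm()

# Hypothetical source article and model-generated summary.
article = (
    "Lionel Messi, born in Rosario, Argentina, joined FC Barcelona's "
    "youth academy in 2000 and made his first-team debut in 2004."
)
summary = "Messi debuted for Barcelona in 2004 after joining its academy."

# Compare the summary against the source and print per-metric results.
factsumm(article, summary, verbose=True)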

113 stars. No commits in the last 6 months. Available on PyPI.

Use this if you are a developer building or evaluating abstractive summarization models and need an automated way to check for factual consistency.

Not ideal if you are looking for a tool to manually summarize documents or need to evaluate human-written summaries.

natural-language-processing text-summarization model-evaluation nlp-development ai-testing
Status: Stale (6 months without commits)
Maintenance 0 / 25
Adoption 9 / 25
Maturity 25 / 25
Community 11 / 25

How are scores calculated? The overall score is the sum of the four 25-point subscores above: 0 + 9 + 25 + 11 = 45.

Stars: 113
Forks: 10
Language: Python
License: Apache-2.0
Last pushed: Jan 01, 2024
Commits (30d): 0
Dependencies: 7

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/nlp/Huffon/factsumm"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
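For programmatic access, a minimal Python sketch follows. The header name used to pass an API key is an assumption (the page documents only the URL and the rate limits), and the response schema is not shown here, so the raw JSON is printed as-is.

import requests

URL = "https://pt-edge.onrender.com/api/v1/quality/nlp/Huffon/factsumm"

# A key is optional: 100 requests/day without one, 1,000/day with one.
# "X-Api-Key" is an assumed header name; check the API docs for the real one.
headers = {}  # e.g. {"X-Api-Key": "your-key"}

resp = requests.get(URL, headers=headers, timeout=10)
resp.raise_for_status()
print(resp.json())  # exact response fields are not documented on this page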