Huffon/factsumm
FactSumm: Factual Consistency Scorer for Abstractive Summarization
This tool helps evaluate whether automatically generated summaries accurately reflect the original source material. You provide a longer article and a generated summary, and it produces a score indicating how factually consistent the summary is, highlighting any discrepancies. This is for developers working on text summarization systems who need to ensure their models produce reliable outputs.
113 stars. No commits in the last 6 months. Available on PyPI.
Use this if you are a developer building or evaluating abstractive summarization models and need an automated way to check for factual consistency.
Not ideal if you are looking for a tool to manually summarize documents or need to evaluate human-written summaries.
Stars
113
Forks
10
Language
Python
License
Apache-2.0
Category
NLP
Last pushed
Jan 01, 2024
Commits (30d)
0
Dependencies
7
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/nlp/Huffon/factsumm"
Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000/day.
Higher-rated alternatives
sildar/potara
Multi-document summarization tool relying on ILP and sentence fusion
hyunwoongko/summarizers
Package for controllable summarization
uoneway/Text-Summarization-Repo
A repository organizing key research topics in text summarization, must-read papers, and available models and datasets, together with recommended resources.
yuhaozhang/summarize-radiology-findings
Code and pretrained model for paper "Learning to Summarize Radiology Findings"
Ravoxsg/SummaReranker
Source code for SummaReranker (ACL 2022)