li-plus/rouge-metric
A Python wrapper of the official ROUGE-1.5.5.pl script and a native Python re-implementation of the full ROUGE metrics.
This tool helps researchers and developers automatically assess the quality of text summarization models. You supply the summaries generated by your model together with a set of human-written reference summaries, and it outputs various ROUGE scores, metrics that indicate how closely your generated summaries match the references. Anyone working in natural language processing, particularly on text generation or summarization, will find this useful.
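As a rough illustration of what a ROUGE score measures (this is a minimal sketch of ROUGE-N with clipped n-gram counts, not the package's own implementation), consider:

```python
from collections import Counter

def rouge_n(hypothesis, reference, n=1):
    """Compute ROUGE-N recall, precision, and F1 from n-gram overlap."""
    def ngrams(tokens, n):
        return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

    hyp = ngrams(hypothesis.split(), n)
    ref = ngrams(reference.split(), n)
    # Clipped overlap: each n-gram counts at most as often as it
    # appears in both the hypothesis and the reference
    overlap = sum((hyp & ref).values())
    recall = overlap / max(sum(ref.values()), 1)
    precision = overlap / max(sum(hyp.values()), 1)
    f1 = 2 * recall * precision / (recall + precision) if recall + precision else 0.0
    return recall, precision, f1

r, p, f = rouge_n("the cat sat on the mat", "the cat is on the mat", n=1)
# 5 of 6 reference unigrams are matched, so recall = precision = f1 = 5/6
```

The real package additionally implements ROUGE-L, ROUGE-W, and ROUGE-S variants and handles multiple references per summary; this sketch only shows the core ROUGE-N idea.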
No commits in the last 6 months. Available on PyPI.
Use this if you need to objectively measure the quality of automatically generated text summaries against human-written references, in a fast and reliable way.
Not ideal if you are evaluating aspects of text quality beyond content overlap, such as grammar, fluency, or factual accuracy.
Stars
21
Forks
—
Language
Perl
License
MIT
Category
Last pushed
Feb 26, 2021
Commits (30d)
0
Dependencies
1
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/nlp/li-plus/rouge-metric"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
kenlimmj/rouge
A Javascript implementation of the Recall-Oriented Understudy for Gisting Evaluation (ROUGE)...
uoneway/KoBertSum
KoBertSum is a Korean summarization model that adapts the BertSum model to Korean data.
udibr/headlines
Automatically generate headlines to short articles
bheinzerling/pyrouge
A Python wrapper for the ROUGE summarization evaluation package
xiongma/transformer-pointer-generator
An Abstractive Summarization Implementation with Transformer and Pointer-generator