li-plus/rouge-metric

A Python wrapper of the official ROUGE-1.5.5.pl script and a re-implementation of the full ROUGE metrics.

Quality score: 31 / 100 (Emerging)

This tool helps researchers and developers automatically assess the quality of text summarization models. You provide the summaries generated by your model along with a set of human-written reference summaries, and it outputs ROUGE scores: metrics that indicate how closely the generated summaries match the references. Anyone working in natural language processing, particularly on text generation or summarization, will find this useful.
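
For orientation, here is a minimal usage sketch based on the PyRouge class described in the project's README (the class and parameter names are taken from that documentation and may vary between versions):

from rouge_metric import PyRouge

# Model-generated summaries and human-written references;
# each hypothesis may be paired with multiple references.
hypotheses = ['the cat sat on the mat']
references = [['a cat was sitting on the mat', 'the cat is on the mat']]

# Compute ROUGE-1, ROUGE-2, and ROUGE-L with the pure-Python implementation.
rouge = PyRouge(rouge_n=(1, 2), rouge_l=True)
scores = rouge.evaluate(hypotheses, references)
print(scores)  # a dict of scores per ROUGE variant; see the README for the exact format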

No commits in the last 6 months. Available on PyPI.

Use this if you need a fast, reliable way to objectively measure the quality of automatically generated summaries against human-written references.

Not ideal if you are evaluating aspects of text quality beyond content overlap, such as grammar, fluency, or factual accuracy.

text-summarization natural-language-processing NLP-evaluation content-analysis AI-model-assessment
Status: Stale (6 months)
Maintenance: 0 / 25
Adoption: 6 / 25
Maturity: 25 / 25
Community: 0 / 25

Stars: 21
Forks:
Language: Perl
License: MIT
Last pushed: Feb 26, 2021
Commits (30d): 0
Dependencies: 1

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/nlp/li-plus/rouge-metric"

Open to everyone: 100 requests/day with no key required. Get a free key for 1,000 requests/day.
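
To consume the same endpoint from Python instead of curl, here is a minimal sketch using only the standard library (the response schema is not documented here, so the code prints the raw JSON rather than assuming field names):

import json
from urllib.request import urlopen

URL = 'https://pt-edge.onrender.com/api/v1/quality/nlp/li-plus/rouge-metric'

# Fetch the quality report; no API key is needed within the free 100/day limit.
with urlopen(URL) as resp:
    data = json.load(resp)

# Inspect the payload before relying on specific fields.
print(json.dumps(data, indent=2))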