pyrouge and rouge-metric

These two Python wrappers for ROUGE are competitors: both aim to provide an interface for evaluating summarization quality, with `bheinzerling/pyrouge` showing significantly higher current adoption and `li-plus/rouge-metric` offering a more up-to-date re-implementation of the full ROUGE metrics.

pyrouge
Score: 49 (Emerging)
Maintenance: 0/25 | Adoption: 10/25 | Maturity: 16/25 | Community: 23/25
Stars: 249 | Forks: 72 | Downloads: | Commits (30d): 0
Language: Python | License: MIT
Flags: Stale 6m, No Package, No Dependents

rouge-metric
Score: 31 (Emerging)
Maintenance: 0/25 | Adoption: 6/25 | Maturity: 25/25 | Community: 0/25
Stars: 21 | Forks: | Downloads: | Commits (30d): 0
Language: Perl | License: MIT
Flags: Stale 6m

About pyrouge

bheinzerling/pyrouge

A Python wrapper for the ROUGE summarization evaluation package

This tool helps researchers and developers working on text summarization evaluate the quality of their automatically generated summaries. It takes your plain text summaries and corresponding 'gold standard' reference summaries, then processes them to produce standardized ROUGE scores. Anyone building or comparing different text summarization models would use this to quantify performance.
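To make concrete what a ROUGE score measures, here is a minimal stdlib-only sketch of ROUGE-N recall, the fraction of reference n-grams that also appear in the generated summary. This is an illustration of the metric itself, not pyrouge's API; the function names are my own.

```python
from collections import Counter

def rouge_n_recall(candidate, reference, n=1):
    """ROUGE-N recall: share of reference n-grams recovered by the candidate."""
    def ngrams(tokens, n):
        return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    cand = ngrams(candidate.lower().split(), n)
    ref = ngrams(reference.lower().split(), n)
    if not ref:
        return 0.0
    overlap = sum((cand & ref).values())  # clipped n-gram matches
    return overlap / sum(ref.values())

score = rouge_n_recall("the cat sat on the mat", "the cat is on the mat")
print(round(score, 3))  # 5 of 6 reference unigrams matched ≈ 0.833
```

The official ROUGE-1.5.5 package that pyrouge wraps adds stemming, stopword handling, and bootstrap confidence intervals on top of this basic counting.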

natural-language-processing text-summarization academic-research model-evaluation content-analysis

About rouge-metric

li-plus/rouge-metric

A Python wrapper of the official ROUGE-1.5.5.pl script and a re-implementation of full ROUGE metrics.

This tool helps researchers and developers automatically assess the quality of text summarization models. You input the summaries generated by your model and a set of human-written reference summaries. It then outputs various ROUGE scores, which are metrics indicating how well your generated summaries match the reference summaries. Anyone working with natural language processing, particularly in text generation or summarization, would find this useful.
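Among the metrics re-implemented here is ROUGE-L, which scores the longest common subsequence (LCS) between candidate and reference rather than fixed-size n-grams. A minimal stdlib-only sketch of the balanced (beta = 1) F-measure variant, written for illustration and not taken from the library's source:

```python
def lcs_len(a, b):
    """Length of the longest common subsequence of two token lists."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if x == y else max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(a)][len(b)]

def rouge_l_f1(candidate, reference):
    """ROUGE-L as a balanced F-measure over LCS precision and recall."""
    c, r = candidate.lower().split(), reference.lower().split()
    lcs = lcs_len(c, r)
    if lcs == 0:
        return 0.0
    prec, rec = lcs / len(c), lcs / len(r)
    return 2 * prec * rec / (prec + rec)
```

Because the LCS need not be contiguous, ROUGE-L rewards summaries that preserve the reference's word order even when extra words are interleaved.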

text-summarization natural-language-processing nlp-evaluation content-analysis ai-model-assessment

Scores updated daily from GitHub, PyPI, and npm data.