pythonrouge and rouge

These two tools are direct alternatives: both provide Python implementations of the ROUGE metric for evaluating summarization quality, so users typically choose one over the other based on features, performance, or maintenance status.

                 pythonrouge          rouge
Overall score    47 (Emerging)        35 (Emerging)
Maintenance      0/25                 0/25
Adoption         10/25                6/25
Maturity         16/25                16/25
Community        21/25                13/25
Stars            162                  24
Forks            34                   4
Downloads                             
Commits (30d)    0                    0
Language         Perl                 Perl
License          MIT                  MIT
Flags (both): Stale 6m, No Package, No Dependents

About pythonrouge

tagucci/pythonrouge

Python wrapper for evaluating summarization quality with the ROUGE package

This tool helps researchers and developers working on text summarization measure the quality of their automatically generated summaries. You provide your system's summaries along with a set of human-written reference summaries, and it calculates ROUGE scores such as ROUGE-1, ROUGE-2, and ROUGE-SU4. It is aimed at anyone building or comparing automated summarization models.

Tags: text summarization, natural language processing, computational linguistics, content evaluation
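The comparison above does not show either repo's API, but the metric the description refers to is easy to sketch. The following is a minimal, self-contained illustration of ROUGE-N recall (n-gram overlap with the reference), not pythonrouge's actual interface, which wraps the original Perl ROUGE script:

```python
from collections import Counter

def ngrams(tokens, n):
    """Multiset of n-grams from a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def rouge_n_recall(candidate, reference, n=1):
    """ROUGE-N recall: fraction of reference n-grams also found in the candidate."""
    cand = ngrams(candidate.split(), n)
    ref = ngrams(reference.split(), n)
    if not ref:
        return 0.0
    overlap = sum(min(count, cand[gram]) for gram, count in ref.items())
    return overlap / sum(ref.values())

reference = "the cat sat on the mat"
candidate = "the cat lay on the mat"
# 5 of the 6 reference unigrams appear in the candidate
print(rouge_n_recall(candidate, reference, n=1))
```

Real implementations add stemming, stopword removal, and multi-reference aggregation on top of this core count, which is why the tools wrap the reference Perl script rather than reimplementing it from scratch.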

About rouge

neural-dialogue-metrics/rouge

An implementation of ROUGE family metrics for automatic summarization.

This tool helps researchers and developers working with natural language processing evaluate the quality of automatically generated summaries or translations. You input two pieces of text: a reference (the 'correct' version) and a candidate (the machine-generated version). It outputs scores (recall, precision, and F-measure) that indicate how well the candidate text matches the reference.

Tags: natural-language-processing, text-summarization, machine-translation, nlp-evaluation, computational-linguistics
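The ROUGE family this repo implements also includes ROUGE-L, which scores the longest common subsequence between candidate and reference instead of fixed-length n-grams, and reports the recall/precision/F-measure triple described above. A hedged sketch of that computation (the beta weighting of 1.2 is an illustrative assumption; implementations differ in how they weight recall against precision):

```python
def lcs_length(a, b):
    """Length of the longest common subsequence of two token lists (dynamic programming)."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if x == y else max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(a)][len(b)]

def rouge_l(candidate, reference, beta=1.2):
    """ROUGE-L recall, precision, and F-measure based on the LCS."""
    cand, ref = candidate.split(), reference.split()
    lcs = lcs_length(cand, ref)
    recall = lcs / len(ref) if ref else 0.0
    precision = lcs / len(cand) if cand else 0.0
    if recall == 0.0 or precision == 0.0:
        return recall, precision, 0.0
    f = ((1 + beta ** 2) * precision * recall) / (recall + beta ** 2 * precision)
    return recall, precision, f

# The full reference appears as a subsequence of the candidate, so recall is 1.0
# while precision is penalized for the extra word "found".
print(rouge_l("the cat was found under the bed", "the cat was under the bed"))
```

Unlike ROUGE-N, ROUGE-L rewards in-order matches even when they are not contiguous, which makes it less sensitive to inserted words.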

Scores updated daily from GitHub, PyPI, and npm data.