ROUGE-2.0 and rouge

These two projects are competitors, as they both implement the ROUGE family of metrics for automatic summarization evaluation.

Metric          ROUGE-2.0        rouge
Overall score   45 (Emerging)    35 (Emerging)
Maintenance     0/25             0/25
Adoption        10/25            6/25
Maturity        16/25            16/25
Community       19/25            13/25
Stars           220              24
Forks           37               4
Downloads                        
Commits (30d)   0                0
Language        Java             Perl
License         Apache-2.0       MIT
Flags           Stale 6m, No Package, No Dependents (both projects)

About ROUGE-2.0

kavgan/ROUGE-2.0

ROUGE automatic summarization evaluation toolkit. Support for ROUGE-[N, L, S, SU], stemming and stopwords in different languages, unicode text evaluation, CSV output.

This tool helps researchers and developers evaluate the quality of automatically generated summaries or translations. You provide the summary produced by your system along with one or more human-written 'reference' summaries; the tool calculates the various ROUGE scores, which indicate how closely the system output overlaps with the references, and writes them to a CSV file. It's used by anyone developing or refining natural language processing models for summarization or translation.
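The core of what ROUGE-N measures is n-gram overlap between a candidate and a reference summary. The sketch below is not ROUGE-2.0's Java API; it is a minimal, illustrative Python implementation of the ROUGE-N recall/precision/F1 computation, with the function name `rouge_n` chosen here for clarity:

```python
from collections import Counter

def rouge_n(candidate: str, reference: str, n: int = 2):
    """Illustrative ROUGE-N: recall, precision, F1 from n-gram overlap."""
    def ngrams(tokens, n):
        # Multiset of n-grams, so repeated n-grams are counted (and clipped) correctly
        return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

    cand = ngrams(candidate.lower().split(), n)
    ref = ngrams(reference.lower().split(), n)
    overlap = sum((cand & ref).values())  # clipped matching n-gram count
    recall = overlap / max(sum(ref.values()), 1)
    precision = overlap / max(sum(cand.values()), 1)
    f1 = 2 * recall * precision / (recall + precision) if recall + precision else 0.0
    return recall, precision, f1
```

For example, `rouge_n("the cat sat on the mat", "the cat is on the mat", n=2)` finds 3 of 5 reference bigrams in the candidate, giving recall, precision, and F1 of 0.6 each. ROUGE-2.0 additionally handles stemming, stopword removal, and multiple references, which this sketch omits.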

text-summarization machine-translation natural-language-processing model-evaluation linguistics

About rouge

neural-dialogue-metrics/rouge

An implementation of ROUGE family metrics for automatic summarization.

This tool helps researchers and developers working with natural language processing evaluate the quality of automatically generated summaries or translations. You input two pieces of text: a reference (the 'correct' version) and a candidate (the machine-generated version). It outputs scores (recall, precision, and F-measure) that indicate how well the candidate text matches the reference.
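Besides n-gram variants, the ROUGE family includes ROUGE-L, which scores the longest common subsequence (LCS) between candidate and reference. The following is an illustrative Python sketch of that computation, not this Perl project's actual interface; the name `rouge_l` is assumed for the example:

```python
def rouge_l(candidate: str, reference: str):
    """Illustrative ROUGE-L: recall, precision, F1 from the LCS length."""
    c = candidate.lower().split()
    r = reference.lower().split()
    # Standard dynamic-programming table for LCS length
    dp = [[0] * (len(r) + 1) for _ in range(len(c) + 1)]
    for i in range(1, len(c) + 1):
        for j in range(1, len(r) + 1):
            if c[i - 1] == r[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    lcs = dp[len(c)][len(r)]
    recall = lcs / len(r) if r else 0.0
    precision = lcs / len(c) if c else 0.0
    f1 = 2 * recall * precision / (recall + precision) if recall + precision else 0.0
    return recall, precision, f1
```

Unlike ROUGE-N, ROUGE-L rewards in-order matches without requiring them to be contiguous: "the cat sat on the mat" versus "the cat is on the mat" shares the length-5 subsequence "the cat on the mat".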

natural-language-processing text-summarization machine-translation nlp-evaluation computational-linguistics

Scores updated daily from GitHub, PyPI, and npm data. How scores work