ROUGE-2.0 and rouge
These two projects are alternatives to one another: both implement the ROUGE family of metrics for automatic summarization evaluation.
About ROUGE-2.0
kavgan/ROUGE-2.0
ROUGE automatic summarization evaluation toolkit. Support for ROUGE-[N, L, S, SU], stemming and stopwords in different languages, Unicode text evaluation, CSV output.
This tool helps researchers and developers evaluate the quality of automatically generated summaries or translations. You provide the summary produced by your system along with one or more human-written "reference" summaries; it then calculates the various ROUGE scores, which indicate how closely your system's output matches the human references, and writes the results to a CSV file. It's useful for anyone developing or refining natural language processing models for summarization or translation.
About rouge
neural-dialogue-metrics/rouge
An implementation of ROUGE family metrics for automatic summarization.
This tool helps researchers and developers working in natural language processing evaluate the quality of automatically generated summaries or translations. You input two pieces of text: a reference (the "correct" version) and a candidate (the machine-generated version). It outputs recall, precision, and F-measure scores that indicate how well the candidate text matches the reference.
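To make the recall/precision/F-measure scoring concrete, here is a minimal sketch of how a ROUGE-N score can be computed from n-gram overlap between a reference and a candidate. This is an illustration only, not the API of either project; real toolkits such as the two above additionally handle stemming, stopword removal, and multiple references.

```python
from collections import Counter

def rouge_n(reference, candidate, n=1):
    """Compute ROUGE-N recall, precision, and F-measure for two texts.

    Illustrative sketch only; not taken from either project's code.
    """
    def ngrams(text):
        tokens = text.lower().split()
        return Counter(tuple(tokens[i:i + n])
                       for i in range(len(tokens) - n + 1))

    ref, cand = ngrams(reference), ngrams(candidate)
    # Clipped overlap: each n-gram counts at most as often as it
    # appears in the reference.
    overlap = sum((ref & cand).values())
    recall = overlap / max(sum(ref.values()), 1)
    precision = overlap / max(sum(cand.values()), 1)
    f1 = (2 * recall * precision / (recall + precision)) if overlap else 0.0
    return recall, precision, f1
```

For example, `rouge_n("the cat sat on the mat", "the cat on the mat")` gives a recall of 5/6 (five of the six reference unigrams are covered) and a precision of 1.0 (every candidate unigram appears in the reference).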