pyrouge and pythonrouge

pyrouge and pythonrouge are competing Python wrappers around the same original Perl ROUGE package for summarization evaluation metrics. Because they compute the same scores, users should choose one based on API design preference and maintenance status rather than use both together.

                 pyrouge                              pythonrouge
Score            49                                   47
Stage            Emerging                             Emerging
Maintenance      0/25                                 0/25
Adoption         10/25                                10/25
Maturity         16/25                                16/25
Community        23/25                                21/25
Stars            249                                  162
Forks            72                                   34
Downloads
Commits (30d)    0                                    0
Language         Python                               Perl
License          MIT                                  MIT
Flags            Stale 6m, No Package, No Dependents  Stale 6m, No Package, No Dependents

About pyrouge

bheinzerling/pyrouge

A Python wrapper for the ROUGE summarization evaluation package

This tool helps researchers and developers working on text summarization evaluate the quality of their automatically generated summaries. It takes your plain text summaries and corresponding 'gold standard' reference summaries, then processes them to produce standardized ROUGE scores. Anyone building or comparing different text summarization models would use this to quantify performance.

natural-language-processing text-summarization academic-research model-evaluation content-analysis
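The scores these wrappers report come from n-gram overlap between a system summary and its reference summaries. As a rough illustration of the underlying idea (a hand-rolled sketch, not pyrouge's API, which shells out to the Perl ROUGE-1.5.5 script), ROUGE-1 recall can be computed as:

```python
from collections import Counter

def rouge1_recall(system: str, reference: str) -> float:
    """ROUGE-1 recall: fraction of reference unigrams covered by the system summary."""
    sys_counts = Counter(system.lower().split())
    ref_counts = Counter(reference.lower().split())
    # Clipped overlap: each reference word counts at most as often as it
    # appears in the system summary.
    overlap = sum(min(count, sys_counts[word]) for word, count in ref_counts.items())
    return overlap / sum(ref_counts.values())

# 5 of the 6 reference unigrams appear in the system summary -> 5/6
print(rouge1_recall("the cat sat on the mat", "a cat sat on the mat"))
```

The real wrappers add tokenization, stemming, and stopword options on top of this core overlap count, all delegated to the Perl script.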

About pythonrouge

tagucci/pythonrouge

Python wrapper for evaluating summarization quality by ROUGE package

This tool helps researchers and developers working on text summarization evaluate the quality of their automatically generated summaries. You provide your system's summaries and a set of human-written reference summaries, and it calculates ROUGE variants such as ROUGE-1, ROUGE-2, and ROUGE-SU4. This is for anyone building or comparing automated summarization models.

text-summarization natural-language-processing computational-linguistics content-evaluation
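ROUGE-2, one of the variants pythonrouge reports, extends the same idea from unigrams to bigrams. A minimal self-contained sketch of the metric (again, not pythonrouge's API, which also wraps the Perl script):

```python
from collections import Counter

def bigrams(tokens):
    """Adjacent token pairs, e.g. ['a', 'b', 'c'] -> [('a', 'b'), ('b', 'c')]."""
    return list(zip(tokens, tokens[1:]))

def rouge2_recall(system: str, reference: str) -> float:
    """ROUGE-2 recall: fraction of reference bigrams that also occur in the system summary."""
    sys_bg = Counter(bigrams(system.lower().split()))
    ref_bg = Counter(bigrams(reference.lower().split()))
    if not ref_bg:
        return 0.0
    overlap = sum(min(count, sys_bg[bg]) for bg, count in ref_bg.items())
    return overlap / sum(ref_bg.values())

# 3 of the 5 reference bigrams appear in the system summary -> 3/5
print(rouge2_recall("the cat sat on the mat", "the cat lay on the mat"))
```

ROUGE-SU4 generalizes this further to skip-bigrams (pairs of words at most four positions apart), which rewards summaries that preserve word order loosely rather than exactly.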

Scores updated daily from GitHub, PyPI, and npm data.