kavgan/ROUGE-2.0

ROUGE automatic summarization evaluation toolkit. Support for ROUGE-[N, L, S, SU], stemming and stopwords in different languages, unicode text evaluation, CSV output.

Score: 45/100 (Emerging)

This tool helps researchers and developers evaluate the quality of automatically generated text summaries or translations. You provide the summary produced by your system along with one or more human-written 'reference' summaries; the tool then calculates various ROUGE scores, which measure how closely your system's output overlaps with the references, and writes the results to a CSV file. It's useful for anyone developing or refining natural language processing models for summarization or translation.
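To make the metric concrete, here is a minimal sketch of ROUGE-1 recall (unigram overlap between a system summary and a single reference, divided by the reference's unigram count). This illustrates the general ROUGE-N idea only; it is not the toolkit's actual code, and the class and method names are hypothetical:

```java
import java.util.*;

// Hypothetical sketch of ROUGE-1 recall: clipped unigram overlap
// between a system summary and one reference, divided by the
// number of unigrams in the reference.
public class Rouge1Sketch {
    static double rouge1Recall(String system, String reference) {
        String[] sysTokens = system.toLowerCase().split("\\s+");
        String[] refTokens = reference.toLowerCase().split("\\s+");

        // Count token occurrences on each side so the overlap is
        // "clipped" (a token can't be matched more times than it
        // appears in the system summary).
        Map<String, Integer> sysCounts = new HashMap<>();
        for (String t : sysTokens) sysCounts.merge(t, 1, Integer::sum);
        Map<String, Integer> refCounts = new HashMap<>();
        for (String t : refTokens) refCounts.merge(t, 1, Integer::sum);

        int overlap = 0;
        for (Map.Entry<String, Integer> e : refCounts.entrySet()) {
            overlap += Math.min(e.getValue(),
                                sysCounts.getOrDefault(e.getKey(), 0));
        }
        return (double) overlap / refTokens.length;
    }

    public static void main(String[] args) {
        // 5 of the 6 reference unigrams appear in the system summary.
        double r = rouge1Recall("the cat sat on the mat",
                                "the cat was on the mat");
        System.out.println(r);
    }
}
```

The real toolkit extends this idea to higher-order n-grams, longest common subsequences (ROUGE-L), and skip-bigrams (ROUGE-S/SU), with optional stemming and stopword removal.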

220 stars. No commits in the last 6 months.

Use this if you need a reliable and standardized way to quantify the quality of summaries or translations produced by your machine learning models.

Not ideal if you're looking for a tool that generates summaries or translations itself, rather than evaluating them.

text-summarization machine-translation natural-language-processing model-evaluation linguistics
Stale (6 months) · No package · No dependents
Maintenance: 0/25
Adoption: 10/25
Maturity: 16/25
Community: 19/25


Stars: 220
Forks: 37
Language: Java
License: Apache-2.0
Last pushed: Apr 09, 2020
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/nlp/kavgan/ROUGE-2.0"

Open to everyone: 100 requests/day with no key needed, or get a free key for 1,000/day.