neural-dialogue-metrics/rouge

An implementation of ROUGE family metrics for automatic summarization.

Quality score: 35 / 100 (Emerging)

This tool helps researchers and developers working with natural language processing evaluate the quality of automatically generated summaries or translations. You input two pieces of text: a reference (the 'correct' version) and a candidate (the machine-generated version). It outputs scores (recall, precision, and F-measure) that indicate how well the candidate text matches the reference.
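To make the scoring concrete, here is a minimal sketch of how a ROUGE-1 score (unigram overlap) can be computed; this is an illustrative reimplementation, not this project's actual code, and it tokenizes naively by whitespace.

```python
from collections import Counter

def rouge1(reference: str, candidate: str) -> tuple[float, float, float]:
    """Return (recall, precision, F1) for unigram overlap (ROUGE-1 sketch)."""
    # Naive whitespace tokenization; real implementations tokenize more carefully.
    ref = Counter(reference.lower().split())
    cand = Counter(candidate.lower().split())
    # Clipped overlap: each candidate unigram counts at most as often
    # as it appears in the reference.
    overlap = sum((ref & cand).values())
    recall = overlap / sum(ref.values()) if ref else 0.0
    precision = overlap / sum(cand.values()) if cand else 0.0
    denom = precision + recall
    f1 = 2 * precision * recall / denom if denom else 0.0
    return recall, precision, f1
```

For example, comparing the candidate "the cat on the mat" against the reference "the cat sat on the mat" gives recall 5/6 and precision 1.0, since every candidate word appears in the reference but the word "sat" is missed.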

No commits in the last 6 months.

Use this if you need a reliable and fast Python-based method to automatically score the quality of text summarization or machine translation outputs against a reference, without relying on external Perl scripts.

Not ideal if you need a tool that handles text preprocessing like tokenization, stemming, or stopword removal for you.

Tags: natural-language-processing, text-summarization, machine-translation, nlp-evaluation, computational-linguistics
Status: Stale (6 months), No Package, No Dependents
Maintenance 0 / 25
Adoption 6 / 25
Maturity 16 / 25
Community 13 / 25


Stars: 24
Forks: 4
Language: Perl
License: MIT
Last pushed: Jan 07, 2023
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/nlp/neural-dialogue-metrics/rouge"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
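The same data can be fetched programmatically. Below is a minimal standard-library sketch; the JSON response shape is an assumption, since the API's schema is not documented on this page.

```python
import json
import urllib.request

# Endpoint shown on this page; anonymous access is rate-limited
# to 100 requests/day.
URL = "https://pt-edge.onrender.com/api/v1/quality/nlp/neural-dialogue-metrics/rouge"

def fetch_quality(url: str = URL) -> dict:
    """Fetch the quality record for this package as a parsed JSON dict.

    The response structure (field names, nesting) is an assumption;
    inspect the raw payload before relying on specific keys.
    """
    request = urllib.request.Request(url, headers={"Accept": "application/json"})
    with urllib.request.urlopen(request, timeout=10) as resp:
        return json.load(resp)
```

With a free API key, the page states the limit rises to 1,000 requests/day; a key would typically be passed as a header or query parameter, but the exact mechanism is not specified here.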