pltrdy/files2rouge

Calculating ROUGE score between two files (line-by-line)

Score: 48 / 100 (Emerging)

This tool helps researchers and developers evaluate the quality of automatically generated text summaries. You provide two text files: one with the reference summaries (the 'gold standard') and another with the summaries produced by a system. The tool then calculates ROUGE (Recall-Oriented Understudy for Gisting Evaluation) scores for each line pair, giving a quantitative measure of how closely the generated summaries match the references. This is useful for anyone working on text summarization, question answering, or machine translation.
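To illustrate the metric itself: ROUGE-1 recall is the fraction of reference unigrams that also appear in the candidate summary, with counts clipped to the reference frequency. The sketch below shows line-by-line scoring in Python; note this is only an illustration of the idea, not the tool's actual implementation (files2rouge delegates to the official ROUGE Perl script), and the function names here are hypothetical.

```python
from collections import Counter

def rouge1_recall(reference: str, candidate: str) -> float:
    """ROUGE-1 recall: fraction of reference unigrams found in the
    candidate, with clipped counts (a unigram appearing twice in the
    reference must appear twice in the candidate to count twice)."""
    ref_counts = Counter(reference.lower().split())
    cand_counts = Counter(candidate.lower().split())
    overlap = sum(min(n, cand_counts[tok]) for tok, n in ref_counts.items())
    total = sum(ref_counts.values())
    return overlap / total if total else 0.0

def score_lines(references: list[str], candidates: list[str]) -> float:
    """Average ROUGE-1 recall over paired lines, mirroring the tool's
    line-by-line comparison of two files."""
    scores = [rouge1_recall(r, c) for r, c in zip(references, candidates)]
    return sum(scores) / len(scores) if scores else 0.0
```

For example, with reference "the cat sat" and candidate "the cat ran", two of three reference unigrams match, giving a recall of 2/3.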

191 stars. No commits in the last 6 months.

Use this if you need to quickly and reliably compare a set of generated text summaries against a set of reference summaries, line by line.

Not ideal if you need to calculate ROUGE scores for individual documents without a corresponding file structure, or if you require a pure Python implementation for deeper integration.

Tags: text summarization, natural language processing (NLP), evaluation, information retrieval, computational linguistics
Stale (6 months) · No Package · No Dependents
Maintenance 0 / 25
Adoption 10 / 25
Maturity 16 / 25
Community 22 / 25


Stars: 191
Forks: 52
Language: Perl
License: MIT
Last pushed: Jul 08, 2021
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/nlp/pltrdy/files2rouge"

Open to everyone: 100 requests/day with no key needed. Get a free API key for 1,000 requests/day.