kenlimmj/rouge

A JavaScript implementation of the Recall-Oriented Understudy for Gisting Evaluation (ROUGE) metric for evaluating text summaries.

66 / 100 — Established

This library lets developers on natural language processing projects automatically evaluate the quality of text summaries. You provide a candidate summary generated by your system and one or more reference summaries written by humans, and it outputs a score reflecting the n-gram and subsequence overlap between your summary and the references. It is aimed at software engineers building and testing text summarization algorithms.
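To make the idea concrete, here is a minimal sketch of the simplest ROUGE variant, ROUGE-1 recall (unigram overlap between candidate and reference, divided by the reference length). This is an illustration of the metric itself, not the library's actual API; the function names `countTokens` and `rouge1Recall` are made up for this example.

```javascript
// Count unigram occurrences in a whitespace-tokenized, lowercased string.
function countTokens(text) {
  const counts = new Map();
  for (const token of text.toLowerCase().split(/\s+/).filter(Boolean)) {
    counts.set(token, (counts.get(token) || 0) + 1);
  }
  return counts;
}

// ROUGE-1 recall: clipped unigram overlap / total reference unigrams.
function rouge1Recall(candidate, reference) {
  const candCounts = countTokens(candidate);
  const refCounts = countTokens(reference);
  let overlap = 0;
  let refTotal = 0;
  for (const [token, refCount] of refCounts) {
    refTotal += refCount;
    // Clip each token's credit at its reference count.
    overlap += Math.min(refCount, candCounts.get(token) || 0);
  }
  return refTotal === 0 ? 0 : overlap / refTotal;
}

// 5 of the 6 reference unigrams appear in the candidate → 5/6 ≈ 0.833
console.log(rouge1Recall('the cat sat on the mat', 'the cat was on the mat'));
```

The library generalizes this to higher-order n-grams (ROUGE-N), longest common subsequences (ROUGE-L), and skip-bigrams (ROUGE-S), and can combine precision and recall into an F-score.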

Used by 11 other packages. Available on npm.

Use this if you are building or improving an automated text summarization system and need a programmatic way to benchmark its output against human-written summaries.

Not ideal if you need a general-purpose text analysis tool or an end-user application, as this is a developer library.

natural-language-processing text-summarization algorithm-evaluation software-development data-science
Maintenance 10 / 25
Adoption 13 / 25
Maturity 25 / 25
Community 18 / 25

How are scores calculated?

Stars

44

Forks

13

Language

JavaScript

License

MIT

Last pushed

Mar 09, 2026

Commits (30d)

0

Dependencies

1

Reverse dependents

11

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/nlp/kenlimmj/rouge"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.