pltrdy/files2rouge
Calculating ROUGE score between two files (line-by-line)
This tool helps researchers and developers evaluate the quality of automatically generated text summaries. You provide two text files: one with the reference summaries (the 'gold standard') and another with the summaries produced by a system. It then calculates ROUGE (Recall-Oriented Understudy for Gisting Evaluation) scores for each line pair, giving a quantitative measure of how closely the generated summaries match the references. This is useful for anyone working on text summarization, question answering, or machine translation.
191 stars. No commits in the last 6 months.
Use this if you need to quickly and reliably compare a set of generated text summaries against a set of reference summaries, line by line.
Not ideal if you need to calculate ROUGE scores for individual documents without a corresponding file structure, or if you require a pure Python implementation for deeper integration.
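To make the per-line scores concrete, here is a minimal standalone sketch of ROUGE-1 (unigram overlap) for a single candidate/reference pair. This is an illustration of the metric only, not files2rouge's own implementation, which delegates scoring to the official Perl ROUGE script.

```python
from collections import Counter

def rouge_1(reference: str, candidate: str) -> tuple[float, float, float]:
    """Return (precision, recall, F1) based on unigram overlap.

    A simplified sketch: whitespace tokenization, lowercasing, no
    stemming or stopword handling, unlike the official ROUGE script.
    """
    ref = Counter(reference.lower().split())
    cand = Counter(candidate.lower().split())
    # Clipped overlap: each word counts at most as often as it
    # appears in the shorter of the two multisets.
    overlap = sum((ref & cand).values())
    recall = overlap / max(sum(ref.values()), 1)
    precision = overlap / max(sum(cand.values()), 1)
    f1 = 2 * precision * recall / (precision + recall) if overlap else 0.0
    return precision, recall, f1

p, r, f = rouge_1("the cat sat on the mat", "the cat lay on the mat")
```

files2rouge applies this kind of computation to every aligned line in the two input files and aggregates the results; the sketch above shows only the single-pair core.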
Stars: 191
Forks: 52
Language: Perl
License: MIT
Category:
Last pushed: Jul 08, 2021
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/nlp/pltrdy/files2rouge"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
Higher-rated alternatives
kenlimmj/rouge
A JavaScript implementation of the Recall-Oriented Understudy for Gisting Evaluation (ROUGE)...
uoneway/KoBertSum
KoBertSum is a Korean summarization model that adapts the BertSum model to Korean-language data.
udibr/headlines
Automatically generate headlines to short articles
bheinzerling/pyrouge
A Python wrapper for the ROUGE summarization evaluation package
xiongma/transformer-pointer-generator
An abstractive summarization implementation with Transformer and pointer-generator