chakki-works/sumeval

Well-tested, multi-language evaluation framework for text summarization.

Score: 53 / 100 (Established)

This tool helps researchers and developers evaluate the quality of text summarization models. You input a machine-generated summary alongside one or more human-written "reference" summaries, and the tool outputs scores such as ROUGE and BLEU that quantify how closely the generated summary matches the references. It supports multiple languages, including English, Japanese, and Chinese.
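To make concrete what a ROUGE-style score measures, here is a minimal pure-Python sketch of ROUGE-N recall (overlapping n-grams between summary and reference, divided by the reference's n-gram count). This is an illustration of the metric itself, not sumeval's API; the function name and whitespace tokenization are simplifying assumptions.

```python
from collections import Counter

def rouge_n_recall(summary, reference, n=1):
    """Illustrative ROUGE-N recall: shared n-grams / reference n-grams."""
    def ngrams(text, n):
        tokens = text.lower().split()  # naive whitespace tokenization
        return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

    summary_ngrams = ngrams(summary, n)
    reference_ngrams = ngrams(reference, n)
    # Counter & Counter keeps the minimum count of each shared n-gram.
    overlap = sum((summary_ngrams & reference_ngrams).values())
    total = sum(reference_ngrams.values())
    return overlap / total if total else 0.0

score = rouge_n_recall("I went to Mars", "I went to the Mars", n=1)
print(round(score, 2))  # 4 of the reference's 5 unigrams are matched: 0.8
```

Real implementations such as sumeval add language-aware tokenization, stopword filtering, and precision/F-measure variants on top of this core overlap count.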

625 stars. Used by 1 other package. No commits in the last 6 months. Available on PyPI.

Use this if you are building or comparing different text summarization systems and need a standardized way to measure how good their output summaries are.

Not ideal if you just need to generate summaries and are not concerned with quantitatively evaluating their performance against human standards.

Tags: text summarization · natural language processing · AI model evaluation · content generation · machine translation
Stale for 6 months
Maintenance: 0 / 25
Adoption: 11 / 25
Maturity: 25 / 25
Community: 17 / 25


Stars: 625
Forks: 58
Language: Python
License: Apache-2.0
Last pushed: Jul 15, 2022
Commits (30d): 0
Reverse dependents: 1

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/nlp/chakki-works/sumeval"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.