chakki-works/sumeval
A well-tested, multi-language evaluation framework for text summarization.
This tool helps researchers and developers evaluate the quality of their text summarization models. You provide a machine-generated summary alongside one or more human-written "reference" summaries, and the tool outputs scores such as ROUGE and BLEU that indicate how closely your summary matches the references. Multiple languages are supported, including English, Japanese, and Chinese.
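To make the scoring idea concrete, here is a minimal, self-contained sketch of what a ROUGE-N score measures: the n-gram overlap between a candidate summary and a reference. This is an independent illustration, not sumeval's implementation, and the function names below are hypothetical; for real use, call sumeval's own calculator classes as documented in the repo.

```python
# Illustrative ROUGE-N sketch (hypothetical helper functions, NOT sumeval's API).
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def rouge_n(candidate, reference, n=1):
    """Return (recall, precision, F1) for ROUGE-N over whitespace tokens."""
    cand = Counter(ngrams(candidate.lower().split(), n))
    ref = Counter(ngrams(reference.lower().split(), n))
    overlap = sum((cand & ref).values())  # clipped n-gram match count
    recall = overlap / max(sum(ref.values()), 1)      # matches / reference n-grams
    precision = overlap / max(sum(cand.values()), 1)  # matches / candidate n-grams
    f1 = (2 * recall * precision / (recall + precision)) if overlap else 0.0
    return recall, precision, f1

r, p, f = rouge_n("the cat sat on the mat", "the cat lay on the mat", n=1)
# 5 of 6 unigrams overlap, so recall, precision, and F1 are all 5/6.
```

ROUGE recall rewards covering the reference's content, while precision penalizes padding the candidate with extra text; the F1 combines both.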
625 stars. Used by 1 other package. No commits in the last 6 months. Available on PyPI.
Use this if you are building or comparing different text summarization systems and need a standardized way to measure how good their output summaries are.
Not ideal if you just need to generate summaries and are not concerned with quantitatively evaluating their performance against human standards.
Stars: 625
Forks: 58
Language: Python
License: Apache-2.0
Category:
Last pushed: Jul 15, 2022
Commits (30d): 0
Reverse dependents: 1
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/nlp/chakki-works/sumeval"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Related tools
zhang17173/Event-Extraction
Event extraction from legal judgment documents and its applications, including word segmentation, part-of-speech tagging, named entity recognition, event element extraction, and verdict prediction.
wasiahmad/paraphrase_identification
Examine two sentences and determine whether they have the same meaning.
thuiar/TEXTOIR
TEXTOIR is the first opensource toolkit for text open intent recognition. (ACL 2021)
artitw/BERT_QA
Accelerating the development of question-answering systems based on BERT and TF 2.0
victordibia/neuralqa
NeuralQA: A Usable Library for Question Answering on Large Datasets with BERT