Maluuba/nlg-eval

Evaluation code for various unsupervised automated metrics for Natural Language Generation.

Score: 49 / 100 (Emerging)

This tool helps evaluate the quality of computer-generated text by comparing it against human-written examples. You provide the text produced by your system and one or more reference texts, and it calculates a suite of standard metrics. This is useful for researchers and developers working on systems that generate human-like language, such as chatbots or summarization tools.
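For a concrete picture of that workflow, here is a minimal Python sketch that scores one generated sentence against two references. The compute_individual_metrics helper and its argument order follow the project's Python API as best recalled, so treat the exact names as assumptions rather than verified usage.

# Minimal sketch (assumed API): score one hypothesis against its references.
from nlgeval import compute_individual_metrics

references = ["the cat sat on the mat", "a cat was sitting on the mat"]
hypothesis = "the cat is on the mat"

# Expected to return a dict mapping metric names (BLEU, METEOR, ROUGE-L, CIDEr, etc.) to scores.
metrics = compute_individual_metrics(references, hypothesis)
for name, score in metrics.items():
    print(f"{name}: {score:.4f}")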

1,391 stars. No commits in the last 6 months.

Use this if you need to objectively measure the performance of your Natural Language Generation system using a comprehensive set of automated metrics.

Not ideal if you need a qualitative assessment or want to understand *why* your generated text is good or bad, as this tool only provides quantitative scores.

Topics: natural-language-generation, text-evaluation, language-model-development, dialog-system-evaluation
Status: Stale (6 months), No Package, No Dependents
Score breakdown (the four components sum to the 49/100 total):
Maintenance: 0 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 23 / 25


Stars: 1,391
Forks: 227
Language: Python
License:
Last pushed: Aug 20, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/nlp/Maluuba/nlg-eval"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
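For programmatic access from Python rather than the shell, a minimal sketch using the requests library might look like the following; the response is printed verbatim because its JSON schema is not documented in this listing.

# Minimal sketch: fetch the quality data for this repo and print the raw JSON.
import json
import requests

url = "https://pt-edge.onrender.com/api/v1/quality/nlp/Maluuba/nlg-eval"
response = requests.get(url, timeout=30)
response.raise_for_status()  # fail loudly on rate limiting or other HTTP errors

print(json.dumps(response.json(), indent=2))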