Maluuba/nlg-eval
Evaluation code for various unsupervised automated metrics for Natural Language Generation.
This tool helps evaluate the quality of computer-generated text by comparing it against human-written examples. You provide the text produced by your system and one or more reference texts, and it calculates a suite of standard metrics. This is useful for researchers and developers working on systems that generate human-like language, such as chatbots or summarization tools.
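For a sense of how that comparison is done in code, here is a minimal sketch based on the Python entry point described in the project's README; the file paths are placeholders, and the exact signature should be verified against the repository before use.

from nlgeval import compute_metrics

# Compare one file of system outputs against two files of references.
# 'hyp.txt', 'ref1.txt', and 'ref2.txt' are placeholder paths.
metrics_dict = compute_metrics(hypothesis='hyp.txt',
                               references=['ref1.txt', 'ref2.txt'])
print(metrics_dict)  # dictionary of metric names mapped to scores

Each line of the hypothesis file is scored against the corresponding lines of the reference files, and the result is a dictionary of metric names and values.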
1,391 stars. No commits in the last 6 months.
Use this if you need to objectively measure the performance of your Natural Language Generation system using a comprehensive set of automated metrics.
Not ideal if you need a qualitative assessment or want to understand *why* your generated text is good or bad, as this tool only provides quantitative scores.
Stars: 1,391
Forks: 227
Language: Python
License: —
Category:
Last pushed: Aug 20, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/nlp/Maluuba/nlg-eval"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
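If you would rather consume the endpoint from Python than from curl, a minimal sketch is below; it assumes only that the endpoint returns JSON and does not rely on any particular field names.

import json
import urllib.request

# Fetch the quality record for this repository and pretty-print the JSON response.
url = "https://pt-edge.onrender.com/api/v1/quality/nlp/Maluuba/nlg-eval"
with urllib.request.urlopen(url) as resp:
    data = json.load(resp)
print(json.dumps(data, indent=2))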
Higher-rated alternatives
google/langfun
OO for LLMs
tanaos/artifex
Small Language Model Inference, Fine-Tuning and Observability. No GPU, no labeled data needed.
preligens-lab/textnoisr
Add random noise to a text dataset while precisely controlling the quality of the result
vulnerability-lookup/VulnTrain
A tool to generate datasets and models based on vulnerability descriptions from @Vulnerability-Lookup.
masakhane-io/masakhane-mt
Machine Translation for Africa