SapienzaNLP/guardians-mt-eval

Official repository of the ACL 2024 paper "Guardians of the Machine Translation Meta-Evaluation: Sentinel Metrics Fall In!".

Quality score: 36 / 100 (Emerging)

This project provides tools for evaluating machine translation quality. It takes source texts, candidate translations, or reference translations as input and outputs a numerical quality score for each translation, as well as an overall system-level score. It is useful for researchers, language service providers, and anyone working with machine translation systems who needs to assess translation performance accurately.
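
To make the input and output shapes concrete, the sketch below mirrors that workflow in Python: score each candidate segment, then average the segment scores into a system-level score. The score_segment heuristic and the dictionary keys (src, mt, ref) are illustrative placeholders, not the metric or API shipped in this repository.

    # Minimal sketch of the workflow described above: one quality score per
    # candidate translation, aggregated into a single system-level score.
    # score_segment is a toy length-ratio placeholder, NOT the repository's metric.
    from statistics import mean

    def score_segment(src: str, mt: str, ref: str | None = None) -> float:
        """Toy placeholder: penalize length mismatch against the reference (or source)."""
        target = ref if ref is not None else src
        return 1.0 - abs(len(mt) - len(target)) / max(len(mt), len(target), 1)

    def score_system(segments: list[dict]) -> tuple[list[float], float]:
        """Return per-segment scores and their mean as the system-level score."""
        seg_scores = [score_segment(s["src"], s["mt"], s.get("ref")) for s in segments]
        return seg_scores, mean(seg_scores)

    segment_scores, system_score = score_system(
        [{"src": "Guten Morgen", "mt": "Good morning", "ref": "Good morning"}]
    )
    print(segment_scores, system_score)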

No commits in the last 6 months.

Use this if you need to reliably measure and compare the quality of different machine translation outputs or evaluate the 'translatability' of source texts.

Not ideal if you are looking for a simple, quick way to get a general sense of translation quality without detailed, segment-level analysis.

machine-translation translation-quality-assessment language-services linguistics-research localization
Status: Stale (6 months) · no package published · no dependents
Maintenance: 0 / 25
Adoption: 5 / 25
Maturity: 16 / 25
Community: 15 / 25


Stars: 10
Forks: 5
Language: Python
License: not specified
Last pushed: Nov 19, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/nlp/SapienzaNLP/guardians-mt-eval"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
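
For scripted access, the same endpoint can be queried from Python instead of curl. This is a minimal sketch assuming the endpoint returns JSON; the response schema is not documented on this page, so inspect the output before relying on specific fields.

    # Minimal sketch: fetch the quality data for this repository via the public API.
    # Assumes a JSON response; the exact schema is not shown here, so print and
    # inspect it before depending on particular fields.
    import requests

    url = "https://pt-edge.onrender.com/api/v1/quality/nlp/SapienzaNLP/guardians-mt-eval"
    response = requests.get(url, timeout=10)
    response.raise_for_status()
    print(response.json())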