SapienzaNLP/guardians-mt-eval
Official repository of the ACL 2024 paper "Guardians of the Machine Translation Meta-Evaluation: Sentinel Metrics Fall In!".
This project provides sentinel metrics for machine translation evaluation: models that score each translation from a single input (the original source text, the candidate translation, or the reference translation) and output a numerical segment-level quality score as well as an overall system score. It is useful for researchers, language service providers, and anyone working with machine translation systems who needs to assess translation performance accurately.
No commits in the last 6 months.
Use this if you need to reliably measure and compare the quality of different machine translation outputs or evaluate the 'translatability' of source texts.
Not ideal if you are looking for a simple, quick way to get a general sense of translation quality without detailed, segment-level analysis.
Stars: 10
Forks: 5
Language: Python
License: —
Category: nlp
Last pushed: Nov 19, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/nlp/SapienzaNLP/guardians-mt-eval"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
google/langfun
Object-oriented programming (OO) for LLMs
tanaos/artifex
Small Language Model Inference, Fine-Tuning and Observability. No GPU, no labeled data needed.
preligens-lab/textnoisr
Add random noise to a text dataset while precisely controlling the quality of the result
vulnerability-lookup/VulnTrain
A tool to generate datasets and models based on vulnerability descriptions from @Vulnerability-Lookup.
masakhane-io/masakhane-mt
Machine Translation for Africa