lilt/tec

Evaluation code and data for "Automatic Correction of Human Translations" [NAACL 2022].

Score: 14 / 100 (Experimental)

This project helps translation managers and quality-assurance specialists evaluate automated tools that correct human-made translations. Given English source sentences, the initial German translations, and the automatically corrected German translations, it computes quality metrics for the corrections. It is aimed at professionals who oversee translation workflows or assess machine-translation post-editing.
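To make "quality metrics" concrete, here is a minimal sketch of a TER-style (word-level edit distance) score, the kind of metric translation-correction evaluations typically report. This is an illustrative simplification, not the repository's actual implementation, and it omits the shift operation that full TER includes.

```python
def edit_distance(hyp: list[str], ref: list[str]) -> int:
    """Word-level Levenshtein distance (insertions, deletions, substitutions)."""
    m, n = len(hyp), len(ref)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i
    for j in range(n + 1):
        dp[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if hyp[i - 1] == ref[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[m][n]

def ter(hypothesis: str, reference: str) -> float:
    """Simplified translation edit rate: edits per reference word (lower is better)."""
    hyp, ref = hypothesis.split(), reference.split()
    return edit_distance(hyp, ref) / max(len(ref), 1)

# Compare an original and a corrected translation against a reference:
original  = "Das ist ein Testsatz ."
corrected = "Dies ist ein Testsatz ."
reference = "Dies ist ein Testsatz ."
print(ter(original, reference), ter(corrected, reference))  # 0.2 0.0
```

A correction that moves the hypothesis closer to the reference lowers the score, which is the basic signal such evaluations rely on.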

No commits in the last 6 months.

Use this if you need to objectively measure quality and identify errors in machine-corrected human translations of marketing, technical, or general content.

Not ideal if you are looking for a translation tool itself, or if you only need a simple word-count based quality check.

translation-quality-assurance localization content-management language-services translation-evaluation
No License · Stale (6m) · No Package · No Dependents

Maintenance: 0 / 25
Adoption: 6 / 25
Maturity: 8 / 25
Community: 0 / 25


Stars: 19
Forks: —
Language: Perl
License: none
Last pushed: Dec 09, 2022
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/nlp/lilt/tec"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
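The same endpoint can be consumed from Python. The sketch below is hedged: the URL is the one shown above, but the JSON field names ("score", "stars") are assumptions about the payload shape, not documented fields.

```python
import json
import urllib.request

API_URL = "https://pt-edge.onrender.com/api/v1/quality/nlp/lilt/tec"

def fetch_quality(url: str = API_URL) -> dict:
    """GET the endpoint and decode the JSON body (requires network access)."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.loads(resp.read().decode("utf-8"))

def summarize(payload: dict) -> str:
    """One-line summary; the keys used here are hypothetical placeholders."""
    return f"score={payload.get('score', '?')} stars={payload.get('stars', '?')}"

# Offline usage example with a mocked payload instead of a live request:
sample = json.loads('{"score": 14, "stars": 19}')
print(summarize(sample))  # score=14 stars=19
```

Keeping the parsing separate from the HTTP call makes the summary logic easy to test without hitting the rate-limited endpoint.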