Unbabel/COMET

A Neural Framework for MT Evaluation

Quality score: 57 / 100 (Established)

This tool helps language service providers and machine translation researchers assess the quality of machine-translated text. You input the original source text, one or more machine-translated versions, and optionally a human-translated reference, and it outputs quality scores. It's designed for anyone managing or evaluating machine translation systems, such as MT developers, localization managers, or translation quality-assurance specialists.
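To make the input/output shape concrete: COMET's Python API takes a list of dicts with "src" (source), "mt" (machine translation), and an optional "ref" (human reference) key, as described in the repo's README. The model download and predict() calls below are commented out because they require the `unbabel-comet` package, a checkpoint download, and a GPU or CPU inference run; the data format itself is the point of this sketch.

```python
# Minimal sketch of COMET's expected input: one dict per segment.
# The example sentences follow the style of the repo's README.
samples = [
    {
        "src": "Dem Feuer konnte Einhalt geboten werden",
        "mt": "The fire could be stopped",
        "ref": "They were able to control the fire",
    },
    {
        "src": "Schulen und Kindergärten wurden eröffnet.",
        "mt": "Schools and kindergartens opened",
        "ref": "Schools and kindergartens were open",
    },
]

# Scoring (requires `pip install unbabel-comet` and a model download):
# from comet import download_model, load_from_checkpoint
# model = load_from_checkpoint(download_model("Unbabel/wmt22-comet-da"))
# output = model.predict(samples, batch_size=8, gpus=0)
# output.scores        -> one quality score per segment
# output.system_score  -> corpus-level average

# Every segment must carry at least a source and a translation.
assert all({"src", "mt"} <= d.keys() for d in samples)
print(f"{len(samples)} segments ready for scoring")
```

Reference-free ("quality estimation") COMET models score the same dicts without the "ref" key.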


Use this if you need a reliable and automated way to score machine translation outputs, understand their quality, and compare different MT systems.

Not ideal if you only need a quick, informal check of a single translation and don't require systematic, statistically significant quality evaluation.

machine-translation translation-quality-assurance localization language-AI natural-language-processing
Package: none published · Dependents: none
Maintenance: 10 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 21 / 25


Stars: 723
Forks: 105
Language: Python
License: Apache-2.0
Last pushed: Mar 05, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/Unbabel/COMET"

Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000 requests/day.
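The curl call above returns JSON. A minimal Python sketch for working with such a response follows; note that the field names ("repo", "score", "tier", "breakdown") are an assumption for illustration, since the actual schema is not documented on this page. The numbers themselves come from the score breakdown shown above.

```python
import json

# Hypothetical response shape -- check the service's API docs for the
# real schema. Values mirror the breakdown listed on this page.
sample_response = json.dumps({
    "repo": "Unbabel/COMET",
    "score": 57,
    "tier": "Established",
    "breakdown": {
        "maintenance": 10,
        "adoption": 10,
        "maturity": 16,
        "community": 21,
    },
})

data = json.loads(sample_response)

# The four 0-25 sub-scores sum to the 0-100 total.
assert sum(data["breakdown"].values()) == data["score"]
print(f'{data["repo"]}: {data["score"]}/100 ({data["tier"]})')
```

To hit the live endpoint instead of the sample payload, replace `sample_response` with the body of the curl request shown above (e.g. via `urllib.request.urlopen`).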