Unbabel/COMET
A Neural Framework for MT Evaluation
This tool helps language service providers and machine translation researchers accurately assess the quality of machine-translated text. You input the original source text, one or more machine-translated versions, and optionally a human-translated reference, and it outputs quality scores. It's designed for anyone building or evaluating machine translation systems, such as MT developers, localization managers, or translation quality assurance specialists.
Use this if you need a reliable, automated way to score machine translation output, understand its quality, and compare different MT systems.
Not ideal if you only need a quick, informal check of a single translation and don't require systematic, statistically grounded quality evaluation.
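The source/hypothesis/reference inputs described above map directly onto COMET's Python API: the `unbabel-comet` package expects a list of dicts with `src`, `mt`, and optionally `ref` keys. A minimal sketch, assuming that package layout (the checkpoint name below is one published model and may change between releases):

```python
# Assemble COMET's expected input: one dict per segment pairing the
# source sentence with its MT hypothesis and an optional human reference.
def build_comet_input(sources, hypotheses, references=None):
    if references is None:
        # Reference-free (quality-estimation) mode: src + mt only.
        return [{"src": s, "mt": m} for s, m in zip(sources, hypotheses)]
    return [
        {"src": s, "mt": m, "ref": r}
        for s, m, r in zip(sources, hypotheses, references)
    ]

data = build_comet_input(
    sources=["Der Hund bellt."],
    hypotheses=["The dog barks."],
    references=["The dog is barking."],
)

# With the package installed (pip install unbabel-comet), scoring would
# look like the following (commented out here because it downloads a
# large checkpoint):
#
# from comet import download_model, load_from_checkpoint
# model = load_from_checkpoint(download_model("Unbabel/wmt22-comet-da"))
# output = model.predict(data, batch_size=8, gpus=0)
# output.scores        # one quality score per segment
# output.system_score  # corpus-level average
```

Segment-level scores let you inspect individual translations, while the system-level average is what you would compare across MT systems.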
Stars
723
Forks
105
Language
Python
License
Apache-2.0
Category
Last pushed
Mar 05, 2026
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/Unbabel/COMET"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Related frameworks
process-intelligence-solutions/pm4py
Official public repository for PM4Py (Process Mining for Python) — an open-source library for...
autogluon/autogluon
Fast and Accurate ML in 3 Lines of Code
microsoft/FLAML
A fast library for AutoML and tuning. Join our Discord: https://discord.gg/Cppx2vSPVP.
shankarpandala/lazypredict
Lazy Predict help build a lot of basic models without much code and helps understand which...
aimclub/FEDOT
Automated modeling and machine learning framework FEDOT