Smu-Tan/Remedy

[EMNLP 2025] ReMedy: Learning Machine Translation Evaluation from Human Preferences with Reward Modeling

Quality score: 33 / 100 (Emerging)

ReMedy helps evaluate machine translation quality by learning from human preferences. You input source texts, machine-translated texts, and optionally human reference translations, and it outputs a quality score that aligns closely with how a human would rate the translation. This tool is ideal for translation service providers, researchers, or anyone needing to assess and compare the performance of different machine translation systems.

Use this if you need a highly accurate and human-aligned way to score and compare machine translation outputs, especially across many language pairs or for critical applications.

Not ideal if you don't have access to computing resources (GPUs) or if your primary need is for a simple, quick, rule-based translation check rather than a nuanced human-preference-aligned evaluation.

machine-translation localization natural-language-processing translation-quality-assurance language-AI
No package published · No dependents
Maintenance 6 / 25
Adoption 5 / 25
Maturity 16 / 25
Community 6 / 25


Stars: 14
Forks: 1
Language: Python
License: Apache-2.0
Last pushed: Nov 20, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/nlp/Smu-Tan/Remedy"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
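If you prefer to call the API from Python instead of curl, a minimal sketch is below. Only the endpoint shown in the curl example above is known; the `quality_url` helper, the `category`/`owner`/`repo` parameter names, and the response's JSON shape are assumptions for illustration.

```python
# Hypothetical helper for the quality API; only the URL pattern from the
# curl example above is taken from the source -- everything else is assumed.
from urllib.parse import quote

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the per-repository quality endpoint URL."""
    # quote() guards against path segments containing special characters
    return f"{BASE}/{quote(category)}/{quote(owner)}/{quote(repo)}"

print(quality_url("nlp", "Smu-Tan", "Remedy"))
# -> https://pt-edge.onrender.com/api/v1/quality/nlp/Smu-Tan/Remedy
```

To actually fetch the data you could pass this URL to `urllib.request.urlopen` or `requests.get`; within the free tier no API key header is needed.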