Smu-Tan/Remedy
[EMNLP 2025] ReMedy: Learning Machine Translation Evaluation from Human Preferences with Reward Modeling
ReMedy evaluates machine translation quality by learning from human preferences. Given source texts, machine-translated texts, and optionally human reference translations, it outputs a quality score that aligns closely with how a human would rate the translation. It is aimed at translation service providers, researchers, and anyone who needs to assess and compare the output of different machine translation systems.
Use this if you need a highly accurate and human-aligned way to score and compare machine translation outputs, especially across many language pairs or for critical applications.
Not ideal if you lack access to GPU compute, or if you only need a simple, quick, rule-based translation check rather than a nuanced, human-preference-aligned evaluation.
Stars: 14
Forks: 1
Language: Python
License: Apache-2.0
Category: nlp
Last pushed: Nov 20, 2025
Commits (30d): 0
Get this data via the API:
curl "https://pt-edge.onrender.com/api/v1/quality/nlp/Smu-Tan/Remedy"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
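The curl command above can also be issued from Python. A minimal sketch, assuming only the URL pattern shown on this page (`/api/v1/quality/<category>/<owner>/<repo>`); the JSON response schema is not documented here, so the fetched payload should be inspected rather than assumed:

```python
# Hedged sketch of querying the quality API; only the URL pattern is
# taken from this page, the response schema is not documented here.
import json
from urllib.request import urlopen

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(category: str, repo: str) -> str:
    """Build the API URL for a repo, e.g. for category "nlp" and
    repo "Smu-Tan/Remedy"."""
    return f"{API_BASE}/{category}/{repo}"


url = quality_url("nlp", "Smu-Tan/Remedy")
print(url)
# To actually fetch (counts against the 100 requests/day limit):
#   with urlopen(url) as resp:
#       data = json.load(resp)  # inspect the payload; schema is undocumented
```

Keeping the network call commented out avoids burning through the unauthenticated daily quota while experimenting with URL construction.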
Higher-rated alternatives
n-waves/multifit
The code to reproduce results from paper "MultiFiT: Efficient Multi-lingual Language Model...
princeton-nlp/SimCSE
[EMNLP 2021] SimCSE: Simple Contrastive Learning of Sentence Embeddings https://arxiv.org/abs/2104.08821
yxuansu/SimCTG
[NeurIPS'22 Spotlight] A Contrastive Framework for Neural Text Generation
alibaba-edu/simple-effective-text-matching
Source code of the ACL2019 paper "Simple and Effective Text Matching with Richer Alignment Features".
Shark-NLP/OpenICL
OpenICL is an open-source framework to facilitate research, development, and prototyping of...