gsarti/pecore

Materials for "Quantifying the Plausibility of Context Reliance in Neural Machine Translation" at ICLR'24 🐑 🐑

Score: 27 / 100 (Experimental)

This project provides Plausibility Evaluation of Context Reliance (PECoRe), a framework for assessing how much language models, especially machine translation systems, genuinely use surrounding text when generating output. Given a translation and its source context, it highlights which parts of the translation are context-sensitive and traces each of those dependencies back to the contextual cues that triggered it. Machine translation evaluators, language model researchers, and anyone concerned with the trustworthiness of AI translations can use it to scrutinize model behavior.

No commits in the last 6 months.

Use this if you need to understand whether a machine translation model is plausibly using surrounding sentences to generate translations, or if it's simply producing output without real contextual understanding.

Not ideal if you are looking for a general-purpose machine translation system to produce translations, as this tool focuses specifically on evaluating contextual reliance rather than performing translation.

Machine Translation Evaluation · Language Model Interpretability · AI Safety · Contextual AI Analysis · NLP Research
Stale (6m) · No Package · No Dependents
Maintenance: 0 / 25
Adoption: 6 / 25
Maturity: 16 / 25
Community: 5 / 25


Stars: 15
Forks: 1
Language: Jupyter Notebook
License: Apache-2.0
Last pushed: Apr 18, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/gsarti/pecore"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
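The same endpoint can be called from Python. The URL path is taken from the curl command above; the shape of the JSON response is an assumption and may differ, so this is a sketch rather than a definitive client:

```python
# Minimal sketch of a client for the pt-edge quality API.
# The endpoint path comes from the page; the response is assumed to be
# JSON, but its exact schema is not documented here.
import json
from urllib.request import urlopen

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(registry: str, name: str) -> str:
    """Build the API URL for a repo (registry and name as shown on the page)."""
    return f"{BASE}/{registry}/{name}"

def fetch_quality(registry: str, name: str) -> dict:
    """Fetch and parse the quality record (performs a network request)."""
    with urlopen(quality_url(registry, name)) as resp:
        return json.load(resp)

print(quality_url("transformers", "gsarti/pecore"))
# → https://pt-edge.onrender.com/api/v1/quality/transformers/gsarti/pecore
```

Anonymous access is rate-limited to 100 requests per day, so cache responses or use a free API key for the 1,000/day tier.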