cisnlp/MEXA

🔍 Multilingual Evaluation of English-Centric LLMs via Cross-Lingual Alignment

Score: 28 / 100 (Experimental)

This tool helps you evaluate how well an English-centric large language model (LLM) understands other languages. You provide a dataset of parallel sentences (e.g., English and Spanish), and it calculates an "alignment score" that shows how similar the LLM's understanding of different languages is to its English understanding. This helps AI researchers and engineers understand how effective an LLM will be for multilingual applications without extensive testing.
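To make the idea concrete, here is a minimal, hypothetical sketch of an alignment score as a mean cosine similarity over parallel sentence pairs. The actual MEXA method works on the model's internal embeddings and differs in detail; the toy vectors below stand in for sentence embeddings purely for illustration.

```python
# Hypothetical sketch: alignment as mean cosine similarity between an
# LLM's embeddings of parallel English/Spanish sentences. The vectors
# here are toy placeholders, not real model embeddings.
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def alignment_score(english_embs, other_embs):
    """Mean cosine similarity over parallel sentence pairs."""
    pairs = list(zip(english_embs, other_embs))
    return sum(cosine(e, o) for e, o in pairs) / len(pairs)

# Toy 4-dimensional "embeddings" for three parallel sentence pairs;
# the Spanish vectors are the English ones plus a little noise.
rng = np.random.default_rng(0)
en = [rng.normal(size=4) for _ in range(3)]
es = [e + rng.normal(scale=0.1, size=4) for e in en]

score = alignment_score(en, es)
print(round(score, 3))  # close to 1.0 for near-parallel vectors
```

A score near 1 suggests the model represents the two languages similarly; lower scores suggest weaker cross-lingual transfer.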

No commits in the last 6 months.

Use this if you need to quickly estimate how well an English-centric LLM will perform on tasks in various non-English languages based on its English performance.

Not ideal if you need a direct, task-specific performance metric for a non-English language or if your LLM is not primarily English-centric.

Tags: LLM evaluation, multilingual NLP, AI model assessment, cross-lingual transfer, natural language processing
Stale (6 months) · No Package · No Dependents
Maintenance 0 / 25
Adoption 5 / 25
Maturity 16 / 25
Community 7 / 25


Stars: 11
Forks: 1
Language: Python
License: Apache-2.0
Last pushed: Apr 06, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/embeddings/cisnlp/MEXA"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
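For programmatic use, the curl call above can be reproduced from Python's standard library. This is a sketch only: the per-repository URL pattern comes from the example above, but the JSON response fields are not documented here, so no schema is assumed.

```python
# Hypothetical client sketch for the quality-score API shown above.
# Only the URL pattern is taken from the documented curl example;
# the response schema is an assumption left undecoded here.
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality/embeddings"

def quality_url(owner, repo):
    """Build the per-repository endpoint URL."""
    return f"{BASE}/{owner}/{repo}"

def fetch_quality(owner, repo):
    """Fetch and decode the quality record (100 requests/day without a key)."""
    with urllib.request.urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)

url = quality_url("cisnlp", "MEXA")
print(url)
```

Calling `fetch_quality("cisnlp", "MEXA")` would perform the same request as the curl example, subject to the daily rate limit.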