Betswish/Cross-Lingual-Consistency

Easy-to-use framework for evaluating the cross-lingual consistency of factual knowledge in multilingual language models (supports LLaMA, BLOOM, mT5, RoBERTa, etc.). Paper: https://aclanthology.org/2023.emnlp-main.658/

Overall score: 29 / 100 (Experimental)

This framework helps AI researchers and developers assess whether a multilingual language model provides consistent factual information across languages. You supply a large language model and specify two languages, and it outputs a score indicating how consistent the model's factual knowledge is between them. It is designed for those who develop or evaluate multilingual AI systems and need to ensure fairness and reliability across user languages.
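As an illustration of the idea (not the repository's actual API, and a simplification of the paper's RankC metric), a minimal consistency score can be computed as the fraction of facts on which the model's top prediction agrees across the two languages, after mapping answers to a shared label such as the English entity name. All names and data below are hypothetical:

```python
def consistency_score(preds_a, preds_b):
    """Fraction of aligned facts where the model's top-1 answer
    agrees between language A and language B.

    preds_a, preds_b: lists of top-1 answers, aligned by fact and
    mapped to a shared label space (e.g. English entity names).
    """
    assert len(preds_a) == len(preds_b) and preds_a
    agree = sum(a == b for a, b in zip(preds_a, preds_b))
    return agree / len(preds_a)

# Hypothetical toy data: top-1 answers for the same four facts,
# queried in English and in Spanish, mapped to shared labels.
en = ["Paris", "Einstein", "Mercury", "1945"]
es = ["Paris", "Newton",   "Mercury", "1945"]
print(consistency_score(en, es))  # 0.75
```

The real framework ranks multiple candidate answers per fact and weights the overlap; this sketch only captures the top-1 agreement case.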

No commits in the last 6 months.

Use this if you are a researcher or developer concerned with how consistently multilingual large language models retrieve factual knowledge across different languages.

Not ideal if you are an end-user simply looking to use a language model and not evaluate its internal cross-lingual consistency.

Tags: multilingual-AI, language-model-evaluation, factual-consistency, AI-fairness, NLP-research
Flags: Stale (6m), No Package, No Dependents
Maintenance: 2 / 25
Adoption: 7 / 25
Maturity: 16 / 25
Community: 4 / 25
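The overall score of 29 / 100 shown above appears to be the simple sum of the four 25-point subscores; assuming that scheme, the arithmetic checks out:

```python
# Subscores as listed on this page (each out of 25).
subscores = {"Maintenance": 2, "Adoption": 7, "Maturity": 16, "Community": 4}

# Assumed scoring scheme: overall = sum of the four subscores.
total = sum(subscores.values())
print(total)  # 29 -- matches the 29/100 overall score
```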


Stars: 27
Forks: 1
Language: Python
License: Apache-2.0
Last pushed: Aug 08, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/Betswish/Cross-Lingual-Consistency"

Open to everyone: 100 requests/day with no key needed; a free key raises the limit to 1,000/day.
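The same data can be fetched from Python using only the standard library. The URL structure follows the curl command above; the shape of the JSON response is an assumption, so it is returned as-is rather than picked apart:

```python
import json
from urllib.parse import quote
from urllib.request import urlopen

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(ecosystem: str, owner: str, repo: str) -> str:
    # Build the per-repo quality endpoint, matching the curl example.
    return f"{API_BASE}/{quote(ecosystem)}/{quote(owner)}/{quote(repo)}"

def fetch_quality(ecosystem: str, owner: str, repo: str, timeout: float = 10.0):
    # Network call; the response fields are not documented here,
    # so the parsed JSON is returned unchanged.
    with urlopen(quality_url(ecosystem, owner, repo), timeout=timeout) as resp:
        return json.load(resp)

print(quality_url("transformers", "Betswish", "Cross-Lingual-Consistency"))
```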