daniel-furman/polyglot-or-not
Are foundation LMs multilingual knowledge bases? (EMNLP 2023)
This project helps developers and researchers evaluate how well large language models (LLMs) recall factual information across 20 languages. It takes a list of factual associations (e.g., 'The capital of France is Paris') and tests whether an LLM assigns the true completion a higher probability than a set of plausible false alternatives. The primary users are AI/ML researchers and practitioners building or evaluating multilingual LLMs.
No commits in the last 6 months.
Use this if you need to quantitatively measure the multilingual factual knowledge of an LLM and compare its performance across various languages.
Not ideal if you cannot access vocabulary-wide probabilities from the LLM you want to evaluate (e.g., some closed-source models like GPT-4).
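The core check described above can be sketched as: score each candidate completion by its log-probability under the model, and count the fact as known when the true completion outscores every false one. A minimal sketch (the `toy_logprob` scorer below is a hypothetical stand-in; a real evaluation would sum the LM's token log-probabilities for each completion):

```python
from typing import Callable, List

def assess_fact(
    prompt: str,
    true_completion: str,
    false_completions: List[str],
    logprob: Callable[[str, str], float],
) -> bool:
    """Return True when the model scores the true completion above
    every plausible-but-false alternative."""
    true_score = logprob(prompt, true_completion)
    return all(true_score > logprob(prompt, f) for f in false_completions)

# Hypothetical stand-in for an LM scorer: real usage would compute the
# model's summed token log-probabilities for `completion` given `prompt`.
def toy_logprob(prompt: str, completion: str) -> float:
    scores = {
        ("The capital of France is", "Paris"): -0.5,
        ("The capital of France is", "Lyon"): -4.2,
        ("The capital of France is", "Marseille"): -5.0,
    }
    return scores.get((prompt, completion), -10.0)

print(assess_fact("The capital of France is", "Paris",
                  ["Lyon", "Marseille"], toy_logprob))  # True
```

This comparison is why token-level probabilities are required: without them, the true and false completions cannot be scored against each other.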
Stars: 19
Forks: —
Language: Jupyter Notebook
License: Apache-2.0
Category:
Last pushed: Dec 08, 2023
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/daniel-furman/polyglot-or-not"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
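The endpoint in the curl example follows a category/owner/repo path pattern. A small helper to build the URL for any repo (the base URL is taken from the example above; the JSON response schema is not assumed):

```python
# Base URL as shown in the curl example on this page.
BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the API URL for a repo, mirroring the curl example."""
    return f"{BASE}/{category}/{owner}/{repo}"

url = quality_url("transformers", "daniel-furman", "polyglot-or-not")
# Fetching `url` (e.g. with urllib.request.urlopen) returns the JSON
# payload, subject to the 100 requests/day unauthenticated limit.
print(url)
```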
Higher-rated alternatives
huggingface/transformers
🤗 Transformers: the model-definition framework for state-of-the-art machine learning models in...
kyegomez/LongNet
Implementation of plug in and play Attention from "LongNet: Scaling Transformers to 1,000,000,000 Tokens"
pbloem/former
Simple transformer implementation from scratch in pytorch. (archival, latest version on codeberg)
NVIDIA/FasterTransformer
Transformer related optimization, including BERT, GPT
kyegomez/SimplifiedTransformers
SimplifiedTransformer simplifies transformer block without affecting training. Skip connections,...