DAMO-NLP-SG/LLM-Multilingual-Knowledge-Boundaries

[ACL 2025] Analyzing LLMs' Multilingual Knowledge Boundary Cognition Across Languages Through the Lens of Internal Representations

Score: 37 / 100 (Emerging)

This project helps AI researchers and practitioners understand how Large Language Models (LLMs) determine whether they know the answer to a question across different languages. By analyzing the model's internal representations, it takes multilingual question-answer pairs (some known, some unknown) and reveals where and how knowledge boundaries are encoded across the model's layers. This is useful for anyone working on improving reliability and reducing hallucinations in multilingual LLMs, especially in low-resource languages.

Use this if you are a researcher or engineer investigating why LLMs sometimes "hallucinate" or confidently provide incorrect information, particularly when dealing with questions in multiple languages.

Not ideal if you are looking for a ready-to-use LLM application or a tool for general text generation or translation.

Tags: AI-safety, LLM-evaluation, multilingual-NLP, hallucination-prevention, AI-research
No package, no dependents
Maintenance: 6 / 25
Adoption: 6 / 25
Maturity: 16 / 25
Community: 9 / 25


Stars: 18
Forks: 2
Language: Jupyter Notebook
License: MIT
Last pushed: Oct 18, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/DAMO-NLP-SG/LLM-Multilingual-Knowledge-Boundaries"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
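The curl command above follows a fixed URL pattern, so the endpoint for any repository can be built programmatically. A minimal Python sketch, assuming the path segments are simply the GitHub owner and repository name (the helper function name is illustrative, not part of the API):

```python
BASE = "https://pt-edge.onrender.com/api/v1/quality/transformers"

def quality_url(owner: str, repo: str) -> str:
    """Build the quality-API URL for a given GitHub owner/repo pair."""
    return f"{BASE}/{owner}/{repo}"

# Reproduces the URL from the curl example above.
print(quality_url("DAMO-NLP-SG", "LLM-Multilingual-Knowledge-Boundaries"))
```

Fetching that URL (e.g. with `curl` or `urllib.request`) should return the same data shown on this page; the response format is not documented here, so inspect it before parsing.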