Butanium/llm-lang-agnostic

Minimal code to reproduce results from "Separating Tongue from Thought: Activation Patching Reveals Language-Agnostic Concept Representations in Transformers".

Score: 29 / 100 (Experimental)

This project helps natural language processing and AI interpretability researchers study how large language models represent concepts across languages. It takes pre-trained multilingual language models and a dataset of words with their translations and definitions, then reveals how the model processes these concepts internally. The output helps machine learning scientists determine whether a model's representation of a concept is truly language-agnostic or tied to a specific linguistic expression.

No commits in the last 6 months.

Use this if you are a machine learning scientist or NLP researcher seeking to understand the underlying, language-independent conceptual representations within multilingual transformer models.

Not ideal if you are looking for a tool to build or fine-tune an NLP application, or if you need to translate text or generate content.

natural-language-processing ai-interpretability multilingual-models computational-linguistics machine-learning-research
No License · Stale (6m) · No Package · No Dependents

Maintenance: 2 / 25
Adoption: 5 / 25
Maturity: 8 / 25
Community: 14 / 25


Stars: 13
Forks: 3
Language: Jupyter Notebook
License: None
Last pushed: Sep 25, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/Butanium/llm-lang-agnostic"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.