poppingtonic/transformer-visualization

Mechanistic interpretability tutorials, results, and a research log, written as I learn from publicly available research and experimentation. Evolving, open-ended work with slow updates; much of it is incomplete.

Score: 29 / 100 (Experimental)

This project helps AI researchers and students understand how Transformer-based large language models make decisions. It takes a trained Transformer and lets you visualize the internal processing of individual tokens, revealing the 'why' behind its outputs. It also provides pre-generated datasets of specific sentence structures (such as Indirect Object Identification) for focused interpretability studies.

No commits in the last 6 months.

Use this if you are a machine learning researcher or student focused on understanding the internal mechanisms of Transformer models, specifically for tasks like token processing and identifying 'induction heads'.

Not ideal if you are looking for a general-purpose model explanation tool for non-Transformer models or production-ready explainable AI (XAI) solutions for end-users.

Tags: AI interpretability, Transformer models, NLP research, Mechanistic interpretability, Large language models
No License · Stale (6m) · No Package · No Dependents
Maintenance: 2 / 25
Adoption: 5 / 25
Maturity: 8 / 25
Community: 14 / 25

How are scores calculated?

Stars: 9
Forks: 3
Language: Jupyter Notebook
License: None
Last pushed: Apr 19, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/poppingtonic/transformer-visualization"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
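The same request can be made from Python using only the standard library. This is a minimal sketch based on the curl command above; the helper names and the assumption that the endpoint returns JSON are illustrative, not a documented client API.

```python
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(registry: str, owner: str, repo: str) -> str:
    """Build the quality-endpoint URL for a repository."""
    return f"{API_BASE}/{registry}/{owner}/{repo}"


def fetch_quality(registry: str, owner: str, repo: str) -> dict:
    """Fetch the quality report and parse it as JSON.

    Open to everyone: up to 100 requests/day without an API key.
    """
    with urllib.request.urlopen(quality_url(registry, owner, repo)) as resp:
        return json.load(resp)


# Example (performs a live network request):
# report = fetch_quality("transformers", "poppingtonic", "transformer-visualization")
```

The URL builder mirrors the path structure shown in the curl example, so switching repositories only requires changing the arguments.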