AlignmentResearch/tuned-lens

Tools for understanding how transformer predictions are built layer-by-layer

Quality score: 45 / 100 (Emerging)

Tuned Lens helps machine learning researchers understand how large language models form their predictions. Given an existing transformer, it decodes what the model "thinks" at each internal layer, before the final token is produced, letting researchers trace how the prediction is refined step by step.
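The core idea can be sketched in a few lines: a learned affine "translator" maps each layer's hidden state into the model's output space, and the frozen unembedding matrix turns that into vocabulary logits. This is a toy NumPy illustration of the concept only; the shapes, random translators, and function names here are made up for the sketch and are not the tuned-lens library's actual API.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, vocab, n_layers = 8, 50, 4

# Frozen unembedding matrix of the (toy) transformer.
W_U = rng.normal(size=(vocab, d_model))

# One learned affine translator per layer (the tuned-lens idea);
# initialized randomly here purely for illustration.
translators = [
    (rng.normal(size=(d_model, d_model)), rng.normal(size=d_model))
    for _ in range(n_layers)
]

def lens_logits(hidden, layer):
    """Decode one layer's hidden state into vocabulary logits."""
    A, b = translators[layer]
    return W_U @ (A @ hidden + b)

# Hidden state at one token position; read off the top prediction
# the toy model "thinks" at every layer.
h = rng.normal(size=d_model)
top_token_per_layer = [int(lens_logits(h, l).argmax()) for l in range(n_layers)]
```

In the real library the translators are trained so that each layer's decoded distribution matches the model's final output distribution as closely as possible; here they are random, so the per-layer predictions are meaningless and serve only to show the data flow.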

574 stars. No commits in the last 6 months.

Use this if you want to deeply analyze the internal workings of transformer models to understand how they arrive at their final predictions.

Not ideal if you are looking to improve the performance of a model or train new models, as this tool focuses on interpretability rather than development.

Tags: AI interpretability · large language models · transformer analysis · model debugging · neural network understanding
Flags: Stale (6 months) · No package published · No dependents
Score breakdown:
Maintenance: 2 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 17 / 25


Stars: 574
Forks: 62
Language: Python
License: MIT
Last pushed: Aug 07, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/AlignmentResearch/tuned-lens"

Open to everyone: 100 requests/day with no key required. Get a free key for 1,000 requests/day.