ndif-team/nnsight

The nnsight package enables interpreting and manipulating the internals of deep learning models.

Score: 71 / 100 (Verified)

This tool helps AI researchers and machine learning engineers understand and modify how deep learning models, especially large language models, process information. You input a pre-trained PyTorch model and text prompts, and it allows you to observe or change the internal computations (activations) at any layer. The output provides insights into the model's internal workings or modified model behavior.

859 stars. Used by 1 other package. Actively maintained with 20 commits in the last 30 days. Available on PyPI.

Use this if you need to meticulously examine, debug, or causally intervene on the internal computations of a deep learning model to understand its decision-making process.

Not ideal if you are looking for a high-level API for model training, deployment, or general inference without needing deep internal inspection or manipulation.

Tags: AI Explainability · Model Debugging · ML Research · LLM Interpretation · Causal ML
Maintenance: 17 / 25
Adoption: 11 / 25
Maturity: 25 / 25
Community: 18 / 25


Stars: 859
Forks: 79
Language: Python
License: MIT
Last pushed: Mar 13, 2026
Commits (30d): 20
Dependencies: 12
Reverse dependents: 1

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/ndif-team/nnsight"

Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000/day.
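For consuming the endpoint programmatically, a small sketch of parsing the response in Python. The JSON schema here is an assumption mirroring the fields shown on this card; the real payload of the `/api/v1/quality` endpoint may differ, so a sample string stands in for a live request.

```python
import json

# Hypothetical response body, modeled on the fields shown on this card.
sample = json.loads("""
{
  "name": "ndif-team/nnsight",
  "score": 71,
  "stars": 859,
  "forks": 79,
  "license": "MIT",
  "subscores": {"maintenance": 17, "adoption": 11, "maturity": 25, "community": 18}
}
""")

def summarize(pkg: dict) -> str:
    """Render a one-line summary of a quality record."""
    return f"{pkg['name']}: {pkg['score']}/100 ({pkg['stars']} stars, {pkg['license']})"

print(summarize(sample))
```

To hit the live endpoint instead, pass the same URL shown in the curl example to any HTTP client and feed the response body to `json.loads`.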