lapalap/invert

Official GitHub repository for the paper "Labeling Neural Representations with Inverse Recognition"

Quality score: 26 / 100 (Experimental)

This project helps machine learning researchers and practitioners understand which concepts their deep neural networks have learned: given the activations of a trained network, it outputs human-understandable labels describing what each part of the network (e.g., a neuron or feature) responds to, making the model easier to interpret.

Use this if you need to understand the internal workings of a deep neural network and map its complex representations to concepts you can comprehend.

Not ideal if you are looking for global model explanations rather than labels for individual neural representations, or if your workflow requires extensive segmentation masks.

Tags: AI explainability, neural network interpretation, deep learning analysis, model understanding, machine learning research
No license · No package · No dependents
Maintenance: 6 / 25
Adoption: 5 / 25
Maturity: 8 / 25
Community: 7 / 25


Stars: 10
Forks: 1
Language: Jupyter Notebook
License: none
Last pushed: Dec 03, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/lapalap/invert"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
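The curl call above can also be reproduced from Python. A minimal sketch, assuming the endpoint returns a JSON body; the response shape and the mechanism for supplying an API key are not documented here, so only the URL pattern from the curl example is taken as given:

```python
# Hypothetical sketch of querying the quality endpoint shown above.
# Only the URL pattern is taken from the curl example; everything else
# (JSON response, error handling) is an assumption.
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"


def build_quality_url(category: str, owner: str, repo: str) -> str:
    """Build the per-repository quality endpoint URL."""
    return f"{BASE}/{category}/{owner}/{repo}"


def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """GET the endpoint and decode the body, assuming it is JSON."""
    with urllib.request.urlopen(build_quality_url(category, owner, repo)) as resp:
        return json.load(resp)


# Build the URL for the repository on this page (no network call made here).
print(build_quality_url("ml-frameworks", "lapalap", "invert"))
```

With no key, this would count against the 100-requests/day anonymous quota mentioned above.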