jim-berend/semanticlens

Mechanistic understanding and validation of large AI models with SemanticLens

Overall score: 35 / 100 (Emerging)

This tool helps AI researchers and practitioners understand why large vision models make certain predictions. You provide your trained image model and a dataset, and it shows you what specific internal components (like neurons or filters) are 'seeing' or reacting to, translated into human-understandable concepts. This allows you to explain, debug, and validate the model's inner workings.

Use this if you need to gain a mechanistic understanding of what your large vision model has learned and how it processes visual information.

Not ideal if you are looking for a tool to train or fine-tune AI models, or if you primarily work with non-vision data types.

Tags: AI model explainability, computer vision, model validation, machine learning research, AI safety
No Package · No Dependents
Maintenance: 6 / 25
Adoption: 8 / 25
Maturity: 16 / 25
Community: 5 / 25


Stars: 51
Forks: 2
Language: Python
License: BSD-3-Clause
Last pushed: Dec 04, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/jim-berend/semanticlens"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
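For programmatic access from Python, here is a minimal sketch of the same request as the curl command above. It assumes the third-party requests package is installed and that the endpoint returns a JSON document; adjust the error handling to your needs.

import requests

# Same public endpoint as the curl example above; no key needed for up to 100 requests/day.
URL = "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/jim-berend/semanticlens"

response = requests.get(URL, timeout=10)
response.raise_for_status()  # fail loudly on HTTP errors (e.g. rate limiting)

data = response.json()  # assumption: the endpoint returns JSON
print(data)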