deepfx/netlens

A toolkit for interpreting and analyzing neural networks (vision)

Score: 29 / 100 · Experimental

This tool helps people who work with computer vision models understand what their neural networks are "looking" at and how they make decisions. You provide a trained neural network and an input image, and it generates visual explanations: it highlights the parts of the image that matter most for a specific prediction, or creates images that show what the network, or a specific part of it, has learned to recognize. This is useful for machine learning engineers, researchers, and data scientists developing and debugging vision models.

No commits in the last 6 months.

Use this if you need to visualize and interpret how your neural network processes images, attribute specific predictions to parts of an input image, or generate images that represent what different layers or classes within your model have learned.
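The attribution use case described above corresponds to standard gradient-based saliency. As an illustration of that idea only (not netlens's actual API), here is a minimal sketch assuming PyTorch and torchvision; the ResNet-18 model, the file name example.jpg, and the preprocessing are placeholders.

# Hypothetical sketch of gradient-based saliency; not netlens's API.
# Assumes PyTorch + torchvision; swap in your own model and image.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

model = models.resnet18(weights="IMAGENET1K_V1").eval()

preprocess = T.Compose([
    T.Resize(224),
    T.CenterCrop(224),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

img = preprocess(Image.open("example.jpg").convert("RGB")).unsqueeze(0)
img.requires_grad_(True)

# Forward pass, then back-propagate the predicted class score to the pixels.
logits = model(img)
target = logits.argmax(dim=1).item()
logits[0, target].backward()

# Per-pixel importance: max absolute gradient across color channels.
saliency = img.grad.abs().max(dim=1).values.squeeze(0)  # shape (224, 224)

Plotting the resulting saliency tensor as a heatmap over the input image gives the kind of "what the network is looking at" explanation the description refers to.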

Not ideal if you are looking for advanced or robust style transfer, as that feature currently has bugs and may not produce good results.

computer-vision neural-network-interpretation machine-learning-debugging image-analysis model-explainability
Stale (6 months) · No Package · No Dependents
Maintenance 0 / 25
Adoption 7 / 25
Maturity 16 / 25
Community 6 / 25

How are scores calculated?

Stars: 31
Forks: 2
Language: Jupyter Notebook
License: MIT
Last pushed: Jul 28, 2020
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/deepfx/netlens"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
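To call the same endpoint from Python instead of curl, a minimal sketch assuming the requests package (the response schema is not documented here, so it simply pretty-prints whatever JSON comes back):

import json
import requests

url = "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/deepfx/netlens"
resp = requests.get(url, timeout=10)
resp.raise_for_status()  # fail loudly on rate limiting or server errors
print(json.dumps(resp.json(), indent=2))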