NoahVl/Explaining-In-Style-Reproducibility-Study

Re-implementation of StylEx (Lang et al., 2021): training a GAN to explain a classifier's decisions in StyleSpace.

Score: 40 / 100 (Emerging)

This project helps machine learning researchers understand why an image classifier makes specific decisions. By inputting a trained image classifier and a dataset of images, it generates visual explanations in the form of modified images, showing which visual features (like hair color or facial expression) influence the classifier's output. This is useful for researchers who need to interpret and validate the behavior of their AI models.

No commits in the last 6 months.

Use this if you are a machine learning researcher who needs to visually understand the decision-making process of your image classification models.

Not ideal if you are looking for a tool to directly improve classifier accuracy or for non-image-based machine learning interpretability.

AI interpretability image classification machine learning research model explanation computer vision
Badges: Stale (6m), No Package, No Dependents
Maintenance: 0 / 25
Adoption: 7 / 25
Maturity: 16 / 25
Community: 17 / 25


Stars: 38
Forks: 8
Language: Jupyter Notebook
License: (none listed)
Last pushed: Dec 02, 2023
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/NoahVl/Explaining-In-Style-Reproducibility-Study"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
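The endpoint path in the curl command above follows an owner/repo pattern, so other repositories can be queried by substituting their names. A minimal Python sketch of building the request URL (the helper name is hypothetical, and the `diffusion` path segment is taken verbatim from the example above):

```python
def quality_api_url(owner: str, repo: str) -> str:
    """Hypothetical helper: builds the quality-endpoint URL for a given repo.

    Assumes the owner/repo pattern shown in the curl example above.
    """
    return f"https://pt-edge.onrender.com/api/v1/quality/diffusion/{owner}/{repo}"

# Reproduces the URL from the curl example:
print(quality_api_url("NoahVl", "Explaining-In-Style-Reproducibility-Study"))
```

The URL could then be passed to any HTTP client (e.g. `requests.get`) to retrieve the JSON payload; the response schema is not documented here, so inspect it before parsing.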