guillaumejs2403/TIME

Official code for "Text-to-Image Models for Counterfactual Explanations: a Black-Box Approach" (WACV 2024).

Score: 34 / 100 (Emerging)

This project helps machine learning practitioners understand why an image classification model made a specific prediction. You provide an existing image dataset and a pre-trained image classifier; the output is a set of minimally modified images, with corresponding text descriptions, that would cause the classifier to change its prediction. Data scientists and ML researchers can use this to debug and interpret their computer vision models.

No commits in the last 6 months.

Use this if you need human-interpretable "what if" scenarios that explain an image classifier's decisions by showing the minimal changes that would flip its prediction.

Not ideal if you're looking for explanations for non-image data, or if you need to directly modify the internal workings of a neural network rather than using a black-box approach.

Tags: Machine Learning Interpretability, Computer Vision, Model Debugging, AI Explainability, Image Classification
Flags: Stale (6 months), No Package, No Dependents

Maintenance: 0 / 25
Adoption: 5 / 25
Maturity: 16 / 25
Community: 13 / 25


Stars: 9
Forks: 2
Language: Python
License: MIT
Last pushed: Nov 15, 2023
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/guillaumejs2403/TIME"

Open to everyone: 100 requests/day with no key required; a free key raises the limit to 1,000 requests/day.
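For scripted access, the curl call above can be wrapped in a few lines of Python. This is a minimal sketch: the URL pattern (`/api/v1/quality/<category>/<owner>/<repo>`) is taken from the example above, but the shape of the JSON response and its field names are assumptions, not documented here.

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the quality-report URL for a repository, mirroring the curl example."""
    return f"{BASE}/{category}/{owner}/{repo}"


def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """Fetch and decode the JSON quality report (requires network access).

    The returned dict's keys (e.g. scores per category) are an assumption;
    inspect the raw response to see the actual schema.
    """
    with urllib.request.urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)


if __name__ == "__main__":
    # Same request as the curl example above.
    print(quality_url("diffusion", "guillaumejs2403", "TIME"))
```

Keep in mind the unauthenticated rate limit of 100 requests/day when polling many repositories.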