guillaumejs2403/TIME
Text-to-Image Models for Counterfactual Explanations: a Black-Box Approach. Official code (WACV 2024).
This project helps machine learning practitioners understand why an image classification model made a specific prediction. You provide an existing image dataset and a pre-trained image classifier; the output is a set of minimally modified images, with corresponding text descriptions, that cause the classifier to change its prediction. Data scientists and ML researchers can use this to debug and interpret their computer vision models.
No commits in the last 6 months.
Use this if you need human-interpretable 'what if' scenarios that explain an image classifier's decisions by showing the minimal changes that would flip its prediction.
Not ideal if you need explanations for non-image data, or if you want to directly inspect or modify a network's internals rather than treat it as a black box.
Stars: 9
Forks: 2
Language: Python
License: MIT
Category: diffusion
Last pushed: Nov 15, 2023
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/guillaumejs2403/TIME"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
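If you want to query this endpoint programmatically rather than via curl, a small Python helper works with just the standard library. This is a minimal sketch: the URL pattern follows the curl example above, but the `Authorization: Bearer` header for keyed access and the shape of the JSON response are assumptions, not documented API behavior.

```python
# Hypothetical client for the pt-edge quality API shown above.
# Only the URL construction mirrors the documented curl example; the
# auth header and JSON response format are assumptions.
import json
import urllib.request
from urllib.parse import quote

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"


def build_quality_url(category: str, owner: str, repo: str) -> str:
    """Build the per-repository quality endpoint URL."""
    return f"{API_BASE}/{quote(category)}/{quote(owner)}/{quote(repo)}"


def fetch_quality(category: str, owner: str, repo: str, api_key=None) -> dict:
    """Fetch the quality record as JSON.

    Pass an API key for the higher rate limit (1,000 requests/day);
    without one you get the anonymous 100/day tier.
    """
    req = urllib.request.Request(build_quality_url(category, owner, repo))
    if api_key:
        # Header name is an assumption; check the API docs for the real scheme.
        req.add_header("Authorization", f"Bearer {api_key}")
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp)
```

For example, `fetch_quality("diffusion", "guillaumejs2403", "TIME")` reproduces the curl request above.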
Higher-rated alternatives
UCSC-VLAA/story-iter
[ICLR 2026] A Training-free Iterative Framework for Long Story Visualization
PaddlePaddle/PaddleMIX
Paddle Multimodal Integration and eXploration, supporting mainstream multi-modal tasks,...
keivalya/mini-vla
a minimal, beginner-friendly VLA to show how robot policies can fuse images, text, and states to...
adobe-research/custom-diffusion
Custom Diffusion: Multi-Concept Customization of Text-to-Image Diffusion (CVPR 2023)
byliutao/1Prompt1Story
🔥ICLR 2025 (Spotlight) One-Prompt-One-Story: Free-Lunch Consistent Text-to-Image Generation...