davidmascharka/tbd-nets
PyTorch implementation of "Transparency by Design: Closing the Gap Between Performance and Interpretability in Visual Reasoning"
This tool helps researchers and AI practitioners understand why an AI model answers a question about an image the way it does. You provide an image and a natural-language question, and it returns the answer along with a visual explanation of the model's reasoning: attention maps showing what each step of the network attends to. It is well suited to anyone who needs to audit or debug visual reasoning systems.
345 stars. No commits in the last 6 months.
Use this if you need to gain insight into how a visual reasoning AI arrives at its conclusions, beyond just getting a correct answer.
Not ideal if you are looking for a general-purpose image recognition or object detection tool without an emphasis on detailed interpretability.
Stars: 345
Forks: 74
Language: Jupyter Notebook
License: MIT
Category:
Last pushed: Dec 07, 2021
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/davidmascharka/tbd-nets"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
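The same endpoint can be called from code. A minimal Python sketch follows; the response schema is not documented here, so it is treated as opaque JSON, and the `quality_url`/`fetch_quality` helper names are illustrative, not part of the API.

```python
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the quality-endpoint URL for a repository."""
    return f"{API_BASE}/{category}/{owner}/{repo}"

def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """Fetch the quality record as JSON.

    Anonymous access is limited to 100 requests/day; the JSON
    field names are undocumented here, so callers should inspect
    the payload before relying on specific keys.
    """
    with urllib.request.urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)

url = quality_url("ml-frameworks", "davidmascharka", "tbd-nets")
```

Calling `fetch_quality("ml-frameworks", "davidmascharka", "tbd-nets")` retrieves the same record as the curl command above.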
Higher-rated alternatives
philipperemy/keras-attention
Keras Attention Layer (Luong and Bahdanau scores).
tatp22/linformer-pytorch
My take on a practical implementation of Linformer for Pytorch.
ematvey/hierarchical-attention-networks
Document classification with Hierarchical Attention Networks in TensorFlow. WARNING: project is...
datalogue/keras-attention
Visualizing RNNs using the attention mechanism
thushv89/attention_keras
Keras Layer implementation of Attention for Sequential models