davidmascharka/tbd-nets

PyTorch implementation of "Transparency by Design: Closing the Gap Between Performance and Interpretability in Visual Reasoning"

Quality score: 49 / 100 (Emerging)

This tool helps researchers and AI practitioners understand why an AI model answers a question about an image in a particular way. You provide an image and a natural language question, and it gives you the answer along with a visual explanation of the model's 'thought process.' It's ideal for anyone who needs to audit or debug visual reasoning AI systems.

345 stars. No commits in the last 6 months.

Use this if you need to gain insight into how a visual reasoning AI arrives at its conclusions, beyond just getting a correct answer.

Not ideal if you are looking for a general-purpose image recognition or object detection tool without an emphasis on detailed interpretability.

Tags: AI explainability, visual question answering, computer vision, research, machine learning, interpretability
Badges: Stale (6m), No Package, No Dependents
Maintenance: 0 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 23 / 25
(The four category scores, each out of 25, sum to the overall 49 / 100.)


Stars: 345
Forks: 74
Language: Jupyter Notebook
License: MIT
Last pushed: Dec 07, 2021
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/davidmascharka/tbd-nets"

Open to everyone: 100 requests/day with no key required; a free key raises the limit to 1,000/day.