Trustworthy-ML-Lab/VLG-CBM

[NeurIPS 2024] A training and evaluation framework for learning interpretable deep vision models and benchmarking concept bottleneck models (CBMs)

Score: 31 / 100 (Emerging)

This tool helps researchers and practitioners in AI interpret why a deep learning vision model makes a particular decision. It takes an image dataset and outputs a trained concept bottleneck model that explains its predictions using human-understandable concepts, offering clear insights into the model's reasoning. This is for AI researchers, machine learning engineers, and data scientists who need to understand and debug computer vision models.
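
To make the idea concrete, here is a minimal, hypothetical sketch of a concept bottleneck model in PyTorch. It is not the VLG-CBM API; the concept names, layer sizes, and class count are illustrative assumptions. The point it shows is the one described above: the classifier sees only a small set of named concept scores, so every prediction can be attributed to human-understandable concepts.

```python
# Minimal concept-bottleneck sketch (illustrative only; NOT the VLG-CBM API).
# Assumptions: a 512-d backbone feature vector, 8 named concepts, 3 classes.
import torch
import torch.nn as nn

CONCEPTS = ["striped", "furry", "has_wheels", "metallic",
            "winged", "four_legged", "feathered", "transparent"]

class ConceptBottleneck(nn.Module):
    def __init__(self, feat_dim=512, n_concepts=len(CONCEPTS), n_classes=3):
        super().__init__()
        self.to_concepts = nn.Linear(feat_dim, n_concepts)  # features -> concept scores
        self.to_classes = nn.Linear(n_concepts, n_classes)  # concepts -> class logits

    def forward(self, features):
        concepts = self.to_concepts(features)  # interpretable intermediate layer
        logits = self.to_classes(concepts)     # prediction uses concepts only
        return logits, concepts

model = ConceptBottleneck()
features = torch.randn(1, 512)                 # stand-in for backbone output
logits, concepts = model(features)

# Because the classifier is linear over concepts, each concept's contribution
# to the predicted class is simply weight * activation:
pred = logits.argmax(dim=1).item()
contrib = model.to_classes.weight[pred] * concepts[0]
for name, c in sorted(zip(CONCEPTS, contrib.tolist()), key=lambda x: -abs(x[1]))[:3]:
    print(f"{name}: {c:+.3f}")
```

The printed values show which concepts drove the decision; that per-concept attribution is the interpretability property CBM benchmarks like this one evaluate.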

No commits in the last 6 months.

Use this if you need to train a computer vision model that provides clear, concept-based explanations for its predictions, rather than just opaque outputs.

Not ideal if you are looking for a plug-and-play solution for general image classification without the need for interpretable concept attribution.

Tags: interpretable AI, explainable AI, computer vision, deep learning, model debugging
Badges: No License · Stale (6m) · No Package · No Dependents
Maintenance: 2 / 25
Adoption: 7 / 25
Maturity: 8 / 25
Community: 14 / 25
(The four sub-scores sum to the overall score of 31 / 100.)

Stars: 29
Forks: 5
Language: Jupyter Notebook
License: None
Last pushed: Jun 05, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/Trustworthy-ML-Lab/VLG-CBM"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
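
For scripted access, a short Python sketch of the same call is below. It uses only the endpoint shown above; the JSON field names (e.g. "score") are assumptions, since the response schema isn't documented on this page, so inspect the raw payload before relying on specific keys.

```python
# Fetch the quality report shown above using only the standard library.
# The endpoint is the one given on this page; the expected JSON keys are
# assumptions, not a documented schema.
import json
import urllib.request

URL = ("https://pt-edge.onrender.com/api/v1/quality/"
       "transformers/Trustworthy-ML-Lab/VLG-CBM")

with urllib.request.urlopen(URL, timeout=10) as resp:
    data = json.load(resp)

# Dump the full payload first, then read an assumed key for illustration.
print(json.dumps(data, indent=2))
score = data.get("score")  # hypothetical key; verify against the real payload
if score is not None:
    print(f"Overall score: {score} / 100")
```

Keyless callers are limited to 100 requests/day as noted above, so cache the response rather than polling in a loop.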