Trustworthy-ML-Lab/VLG-CBM
[NeurIPS 24] A new training and evaluation framework for learning interpretable deep vision models and benchmarking different interpretable concept-bottleneck-models (CBMs)
This tool helps researchers and practitioners interpret why a deep vision model makes a particular decision. It takes an image dataset and outputs a trained concept bottleneck model that explains its predictions in terms of human-understandable concepts, giving clear insight into the model's reasoning. It is aimed at AI researchers, machine learning engineers, and data scientists who need to understand and debug computer vision models.
No commits in the last 6 months.
Use this if you need to train a computer vision model that provides clear, concept-based explanations for its predictions, rather than just opaque outputs.
Not ideal if you are looking for a plug-and-play solution for general image classification without the need for interpretable concept attribution.
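To make the concept-bottleneck idea concrete, here is a minimal PyTorch sketch of the general architecture such models share: an image backbone feeding a layer of named concept scores, followed by an interpretable linear classifier. This is an illustration only, not the repository's actual API; the class name, concept list, and dimensions below are assumptions.

import torch
import torch.nn as nn

# Hypothetical concept set; in practice a CBM's concepts come from
# vision-language guidance or human annotation, but any named list
# illustrates the structure.
CONCEPTS = ["has wings", "has wheels", "is furry"]

class ConceptBottleneckModel(nn.Module):
    def __init__(self, backbone: nn.Module, feat_dim: int, num_classes: int):
        super().__init__()
        self.backbone = backbone                                  # image feature extractor
        self.concept_layer = nn.Linear(feat_dim, len(CONCEPTS))   # one output per named concept
        self.classifier = nn.Linear(len(CONCEPTS), num_classes)   # interpretable final layer

    def forward(self, images: torch.Tensor):
        features = self.backbone(images)
        concept_scores = self.concept_layer(features)   # each unit is tied to a readable concept
        logits = self.classifier(concept_scores)        # prediction is a weighted sum of concepts
        return logits, concept_scores

Because the final decision is a linear function of named concept activations, the last layer's weights can be read directly to see which concepts contributed to a given prediction.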
Stars: 29
Forks: 5
Language: Jupyter Notebook
License: —
Category: —
Last pushed: Jun 05, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/Trustworthy-ML-Lab/VLG-CBM"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
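The same endpoint can be queried from Python. A minimal sketch, assuming only the plain GET shown in the curl example and a JSON response; how an API key would be supplied is not documented here.

import requests

# Same request as the curl example above; the response is assumed to be JSON.
url = "https://pt-edge.onrender.com/api/v1/quality/transformers/Trustworthy-ML-Lab/VLG-CBM"
resp = requests.get(url, timeout=10)
resp.raise_for_status()
print(resp.json())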
Higher-rated alternatives
MadryLab/context-cite
Attribute (or cite) statements generated by LLMs back to in-context information.
microsoft/augmented-interpretable-models
Interpretable and efficient predictors using pre-trained language models. Scikit-learn compatible.
Trustworthy-ML-Lab/CB-LLMs
[ICLR 25] A novel framework for building intrinsically interpretable LLMs with...
poloclub/LLM-Attributor
LLM Attributor: Attribute LLM's Generated Text to Training Data
THUDM/LongCite
LongCite: Enabling LLMs to Generate Fine-grained Citations in Long-context QA