Trustworthy-ML-Lab/Label-free-CBM
[ICLR 23] A new framework to transform any neural network into an interpretable concept bottleneck model (CBM) without needing labeled concept data
This helps data scientists and machine learning engineers transform existing neural networks into models that explain their decisions through human-understandable concepts, without requiring new labeled concept data. It takes a trained neural network and a set of concepts (e.g., "has wings", "striped") and produces a new, more transparent model that reveals which concepts influence each prediction. This lets users see *why* the model made a specific classification, making it easier to trust and debug.
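The core idea can be sketched in a few lines: the backbone's features are projected onto a layer of named concepts, and only those concept scores feed the final classifier, so each class logit decomposes into per-concept contributions. Everything below (names, shapes, random weights) is illustrative, not the repository's actual API:

```python
# Hypothetical sketch of a concept-bottleneck prediction, assuming a
# frozen backbone has already produced a feature vector for one input.
import numpy as np

rng = np.random.default_rng(0)

concepts = ["has wings", "striped", "has beak"]  # human-chosen concept set
feature_dim, n_classes = 8, 2

features = rng.normal(size=feature_dim)                    # backbone output (stand-in)
W_concept = rng.normal(size=(len(concepts), feature_dim))  # learned projection to concepts
W_class = rng.normal(size=(n_classes, len(concepts)))      # interpretable linear head

concept_scores = W_concept @ features  # one score per named concept
logits = W_class @ concept_scores      # prediction uses only the concept scores

prediction = int(np.argmax(logits))
# Each class logit is a weighted sum of concept scores, so the
# contribution of, e.g., "striped" to the prediction is directly readable:
contributions = W_class[prediction] * concept_scores
```

Because the head is linear over concepts, the contributions sum exactly to the winning logit, which is what makes the explanation faithful rather than post-hoc.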
133 stars. No commits in the last 6 months.
Use this if you need to make your deep learning models' decisions transparent and explainable by identifying the underlying concepts driving their predictions, without the costly effort of manually labeling data for those concepts.
Not ideal if your primary goal is the absolute highest prediction accuracy: forcing every prediction through the concept bottleneck can cost a small amount of accuracy in exchange for interpretability.
Stars: 133
Forks: 31
Language: Jupyter Notebook
License: —
Last pushed: Mar 31, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/Trustworthy-ML-Lab/Label-free-CBM"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
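The same endpoint can be called from Python by substituting any owner/repo pair into the path shown in the curl command. This is a minimal sketch that only builds the URL; the response's JSON fields are not documented here, so inspect the payload before depending on them:

```python
# Build the quality-API URL for an owner/repo pair (path mirrors the
# curl example above; the endpoint itself is the only assumed fact).
import urllib.parse

BASE = "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks"

def quality_url(owner: str, repo: str) -> str:
    # quote() guards against owners/repos containing reserved characters
    return f"{BASE}/{urllib.parse.quote(owner)}/{urllib.parse.quote(repo)}"

url = quality_url("Trustworthy-ML-Lab", "Label-free-CBM")
# fetch with urllib.request.urlopen(url) or requests.get(url)
```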
Higher-rated alternatives
obss/sahi
Framework agnostic sliced/tiled inference + interactive ui + error analysis plots
tensorflow/tcav
Code for the TCAV ML interpretability project
MAIF/shapash
🔅 Shapash: User-friendly Explainability and Interpretability to Develop Reliable and Transparent...
TeamHG-Memex/eli5
A library for debugging/inspecting machine learning classifiers and explaining their predictions
csinva/imodels
Interpretable ML package 🔍 for concise, transparent, and accurate predictive modeling...