Trustworthy-ML-Lab/Label-free-CBM

[ICLR 23] A framework to transform any neural network into an interpretable concept bottleneck model (CBM) without needing labeled concept data

38 / 100 (Emerging)

This helps data scientists and machine learning engineers transform existing neural networks into models that explain their decisions using human-understandable concepts, without requiring new labeled concept data. It takes a trained neural network and a set of concepts (e.g., 'has wings', 'striped') and produces a new, more transparent model that reveals which concepts influence its predictions, letting users understand why the model made a specific classification and making it easier to trust and debug.

133 stars. No commits in the last 6 months.

Use this if you need to make your deep learning models' decisions transparent and explainable by identifying the underlying concepts driving their predictions, without the costly effort of manually labeling data for those concepts.

Not ideal if your primary goal is to achieve the absolute highest prediction accuracy, as there might be a slight trade-off when emphasizing interpretability.

AI-explainability model-auditing transparent-AI machine-learning-engineering trustworthy-AI
No License · Stale (6 months) · No Package · No Dependents
Maintenance: 0 / 25
Adoption: 10 / 25
Maturity: 8 / 25
Community: 20 / 25
(These four components sum to the overall score: 0 + 10 + 8 + 20 = 38 / 100.)

Stars: 133
Forks: 31
Language: Jupyter Notebook
License: None
Last pushed: Mar 31, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/Trustworthy-ML-Lab/Label-free-CBM"

Open to everyone: 100 requests/day, no key needed. Get a free key for 1,000/day.