mateoespinosa/cem
Repository for our NeurIPS 2022 paper "Concept Embedding Models", our NeurIPS 2023 paper "Learning to Receive Help", and our ICML 2025 paper "Avoiding Leakage Poisoning"
This project helps machine learning practitioners build models that are both accurate and understandable. You provide input data and some concept labels (like 'stripes' or 'black' for images), and it produces a model that can make predictions while also explaining its reasoning using those concepts. This is ideal for AI developers and researchers who need transparent AI systems.
Use this if you need to build machine learning models where understanding why a prediction was made matters as much as the prediction itself, even when only limited concept annotations are available.
Not ideal if your primary concern is raw predictive accuracy at all costs, or if you do not have any high-level concepts to label your data with.
Stars: 73
Forks: 24
Language: Python
License: MIT
Last pushed: Jan 26, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/mateoespinosa/cem"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
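The endpoint above returns JSON; a minimal sketch of consuming it from Python using only the standard library. The URL pattern comes from the curl command above, but the response field names used here ("stars", "forks", and so on) are assumptions mirroring the stats listed on this page, not a documented schema:

```python
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks"

def repo_stats_url(owner: str, repo: str) -> str:
    """Build the stats URL for a given repository."""
    return f"{API_BASE}/{owner}/{repo}"

def fetch_stats(owner: str, repo: str) -> dict:
    """Fetch and decode the JSON stats payload (requires network access)."""
    with urllib.request.urlopen(repo_stats_url(owner, repo)) as resp:
        return json.load(resp)

# Offline illustration with a sample payload; the field names here
# ("stars", "forks", ...) are guesses at the schema, not documented.
sample = json.loads('{"stars": 73, "forks": 24, "language": "Python"}')
print(repo_stats_url("mateoespinosa", "cem"))
print(sample["stars"], sample["forks"])
```

At the anonymous tier this is limited to 100 requests/day, so a real client should cache responses rather than refetch per page view.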
Related frameworks
obss/sahi
Framework agnostic sliced/tiled inference + interactive ui + error analysis plots
tensorflow/tcav
Code for the TCAV ML interpretability project
MAIF/shapash
🔅 Shapash: User-friendly Explainability and Interpretability to Develop Reliable and Transparent...
TeamHG-Memex/eli5
A library for debugging/inspecting machine learning classifiers and explaining their predictions
csinva/imodels
Interpretable ML package 🔍 for concise, transparent, and accurate predictive modeling...