mateoespinosa/cem

Repository for our NeurIPS 2022 paper "Concept Embedding Models", our NeurIPS 2023 paper "Learning to Receive Help", and our ICML 2025 paper "Avoiding Leakage Poisoning"

Score: 55 / 100 (Established)

This project helps machine learning practitioners build models that are both accurate and understandable. You provide input data and some concept labels (like 'stripes' or 'black' for images), and it produces a model that can make predictions while also explaining its reasoning using those concepts. This is ideal for AI developers and researchers who need transparent AI systems.
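
Roughly what that contract looks like in code is sketched below: a minimal plain-PyTorch toy of a concept-embedding-style model, where inputs are mapped to one small embedding per concept, each concept is predicted from its embedding, and the task label is predicted from the concatenated concept embeddings. This is an illustrative sketch, not this repository's API; the class name, layer sizes, and the single-embedding-per-concept simplification are assumptions.

import torch
import torch.nn as nn

class ToyConceptEmbeddingModel(nn.Module):
    def __init__(self, in_features=64, n_concepts=5, emb_size=16, n_classes=3):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_features, 128), nn.ReLU())
        # One small head per concept, producing that concept's embedding.
        self.concept_embedders = nn.ModuleList(
            [nn.Linear(128, emb_size) for _ in range(n_concepts)]
        )
        # Probability that each concept is active, scored from its embedding.
        self.concept_scorers = nn.ModuleList(
            [nn.Linear(emb_size, 1) for _ in range(n_concepts)]
        )
        # The downstream task is predicted only from the concept embeddings,
        # which is what lets predictions be explained in concept terms.
        self.task_head = nn.Linear(n_concepts * emb_size, n_classes)

    def forward(self, x):
        h = self.backbone(x)
        embs = [embed(h) for embed in self.concept_embedders]
        concept_probs = torch.cat(
            [torch.sigmoid(score(e)) for score, e in zip(self.concept_scorers, embs)],
            dim=-1,
        )
        task_logits = self.task_head(torch.cat(embs, dim=-1))
        return concept_probs, task_logits

model = ToyConceptEmbeddingModel()
x = torch.randn(8, 64)                   # batch of inputs
c = torch.randint(0, 2, (8, 5)).float()  # binary concept labels ('stripes', 'black', ...)
y = torch.randint(0, 3, (8,))            # task labels
concept_probs, task_logits = model(x)
# Joint loss: task accuracy plus agreement with the labeled concepts.
loss = nn.functional.cross_entropy(task_logits, y) \
     + nn.functional.binary_cross_entropy(concept_probs, c)
loss.backward()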

Use this if you need to build machine learning models where understanding why a prediction was made matters as much as the prediction itself, even when only a limited set of concept annotations is available.

Not ideal if your primary concern is raw predictive accuracy at all costs, or if you do not have any high-level concepts to label your data with.

interpretable-AI explainable-AI machine-learning-engineering human-in-the-loop-AI model-transparency
No package · No dependents
Maintenance: 10 / 25
Adoption: 9 / 25
Maturity: 16 / 25
Community: 20 / 25

Stars: 73
Forks: 24
Language: Python
License: MIT
Last pushed: Jan 26, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/mateoespinosa/cem"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
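
A minimal sketch of calling the same endpoint from Python, assuming the endpoint returns a JSON body (the response fields are not documented here, so the sketch simply prints the parsed payload):

import json
import urllib.request

url = "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/mateoespinosa/cem"
with urllib.request.urlopen(url) as resp:
    data = json.load(resp)  # assumes the API responds with JSON
print(json.dumps(data, indent=2))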