tensorflow/tcav
Code for the TCAV ML interpretability project
TCAV (Testing with Concept Activation Vectors) helps you understand which high-level concepts, like gender or color, your model actually relies on when making predictions, even if those concepts were never explicit features in the training data. You provide example inputs that illustrate a concept, and the tool quantifies how important that concept is for a given prediction class. It is aimed at anyone who needs to explain model behavior in human-understandable terms: ethicists, auditors, or product managers.
653 stars. Available on PyPI.
Use this if you need to know *why* your neural network makes certain decisions, by understanding the influence of abstract concepts rather than just individual data points.
Not ideal if you only need to see which specific input features, like individual pixels, contributed to a single prediction.
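The core idea above can be sketched in a few lines. This is a minimal pure-Python illustration, not the tcav library's actual API: a Concept Activation Vector (CAV) is a direction in a layer's activation space that separates concept examples from random examples, and the TCAV score is the fraction of a class's gradients that align with that direction. As a simplification, the CAV here is the difference of mean activations rather than the linear classifier the real implementation trains; all data below is toy data.

```python
# Sketch of the TCAV idea (hypothetical data, simplified CAV).

def mean(vectors):
    """Element-wise mean of a list of equal-length vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def compute_cav(concept_acts, random_acts):
    """CAV direction: mean concept activation minus mean random activation.
    (The real TCAV trains a linear classifier; this is a simplification.)"""
    mc, mr = mean(concept_acts), mean(random_acts)
    return [c - r for c, r in zip(mc, mr)]

def tcav_score(cav, class_gradients):
    """Fraction of examples whose class gradient points in the CAV direction."""
    positive = sum(1 for g in class_gradients if dot(g, cav) > 0)
    return positive / len(class_gradients)

# Toy activations from a hypothetical 3-unit bottleneck layer.
concept = [[1.0, 0.2, 0.0], [0.9, 0.1, 0.1]]   # e.g. "striped" examples
random_ = [[0.1, 0.3, 0.2], [0.0, 0.2, 0.3]]   # random counterexamples
cav = compute_cav(concept, random_)

# Toy gradients of the target class logit w.r.t. the same layer.
grads = [[0.5, 0.0, 0.0], [0.4, 0.1, 0.0], [-0.3, 0.0, 0.1]]
print(tcav_score(cav, grads))  # fraction of gradients aligned with the concept
```

A score near 1.0 means the concept consistently pushes predictions toward the class; near 0.0 means it pushes against it. The library additionally runs this against multiple random concept sets and applies a statistical test to reject spurious directions.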
Stars
653
Forks
151
Language
Jupyter Notebook
License
Apache-2.0
Last pushed
Feb 05, 2026
Commits (30d)
0
Dependencies
5
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/tensorflow/tcav"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Related frameworks
obss/sahi
Framework agnostic sliced/tiled inference + interactive ui + error analysis plots
MAIF/shapash
🔅 Shapash: User-friendly Explainability and Interpretability to Develop Reliable and Transparent...
TeamHG-Memex/eli5
A library for debugging/inspecting machine learning classifiers and explaining their predictions
csinva/imodels
Interpretable ML package 🔍 for concise, transparent, and accurate predictive modeling...
ModelOriented/DALEX
moDel Agnostic Language for Exploration and eXplanation