tensorflow/tcav

Code for the TCAV ML interpretability project

Score: 70 / 100 (Verified)

TCAV (Testing with Concept Activation Vectors) helps you understand which high-level concepts, such as gender or color, your model actually relies on when making predictions, even if those concepts were never explicit features in the training data. You provide examples of a concept, and the tool quantifies how important that concept is for a given prediction class. It's for anyone who needs to explain model behavior in human-understandable terms, such as ethicists, auditors, or product managers.

653 stars. Available on PyPI.

Use this if you need to know *why* your neural network makes certain decisions, by understanding the influence of abstract concepts rather than just individual data points.

Not ideal if you only need to see which specific input features, like individual pixels, contributed to a single prediction.
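The core idea can be sketched in a few lines: train a linear classifier that separates activations of concept examples from activations of random examples, take the classifier's normal vector as the concept activation vector (CAV), and score a class by the fraction of its inputs whose gradient has a positive directional derivative along the CAV. This is a minimal NumPy sketch on synthetic data, not the library's actual API; all shapes, values, and variable names here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "activations" at some network layer (illustrative stand-ins):
# concept examples cluster along a direction, random examples do not.
concept_acts = rng.normal(1.0, 0.5, size=(50, 8))
random_acts = rng.normal(0.0, 0.5, size=(50, 8))

# Fit a linear (logistic) classifier by gradient descent; its weight
# vector, normalized, serves as the concept activation vector (CAV).
X = np.vstack([concept_acts, random_acts])
y = np.concatenate([np.ones(50), np.zeros(50)])
w, b, lr = np.zeros(8), 0.0, 0.1
for _ in range(200):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid predictions
    w -= lr * (X.T @ (p - y)) / len(y)
    b -= lr * np.mean(p - y)
cav = w / np.linalg.norm(w)

# TCAV score: fraction of per-input gradients (of the class logit with
# respect to the layer activations) pointing in the CAV direction.
# Real gradients come from the model; these are synthetic stand-ins.
class_grads = rng.normal(0.5, 1.0, size=(100, 8))
tcav_score = np.mean(class_grads @ cav > 0)
print(f"TCAV score: {tcav_score:.2f}")
```

In the real library the activations and gradients come from a wrapped TensorFlow model, and the score is compared against CAVs trained on random-vs-random splits to test statistical significance.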

AI-ethics model-auditing explainable-AI bias-detection machine-learning-interpretation
Maintenance 10 / 25
Adoption 10 / 25
Maturity 25 / 25
Community 25 / 25
Stars: 653
Forks: 151
Language: Jupyter Notebook
License: Apache-2.0
Last pushed: Feb 05, 2026
Commits (30d): 0
Dependencies: 5

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/tensorflow/tcav"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
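The same endpoint can be called from code. A small helper that builds the URL shown in the curl example above; the function name and category parameter are hypothetical, and the response schema is not documented here, so only the URL construction is shown.

```python
from urllib.parse import quote

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    # Hypothetical helper: builds the endpoint path used in the curl example.
    return f"{BASE}/{quote(category)}/{quote(owner)}/{quote(repo)}"

print(quality_url("ml-frameworks", "tensorflow", "tcav"))
# https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/tensorflow/tcav
```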