lkopf/cosy
[NeurIPS 2024] CoSy is an automatic evaluation framework for textual explanations of neurons.
CoSy helps AI researchers and practitioners quantitatively assess the quality of textual explanations generated for individual neurons within deep neural networks. It takes existing text descriptions of neuron functions and a control dataset, then generates synthetic data points from each description and measures how strongly the neuron responds to them relative to the control data. The output is a score indicating the explanation's quality, allowing users to compare different explanation methods.
Use this if you are developing or using AI explanation methods and need a standardized, architecture-agnostic way to evaluate how well text descriptions truly represent what a neuron is detecting.
Not ideal if you are looking for methods to *generate* neuron explanations, or if you need to explain an entire model's decision process rather than individual neuron functions.
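The scoring idea above can be sketched in a few lines: compare a neuron's activations on the explanation-matched synthetic inputs against its activations on the control dataset. This is a minimal illustration, not CoSy's actual implementation; the function name and the exact normalization are assumptions, and the paper defines the precise metrics.

```python
import statistics

def explanation_score(synthetic_acts: list[float], control_acts: list[float]) -> float:
    """Hypothetical sketch of a CoSy-style score: how much more a neuron
    fires on inputs generated from its textual explanation than on
    control data, normalized by the control activations' spread."""
    mu_control = statistics.mean(control_acts)
    sigma_control = statistics.stdev(control_acts)
    mu_synthetic = statistics.mean(synthetic_acts)
    # Higher score: the neuron responds more strongly to inputs matching
    # its explanation, suggesting the explanation describes it well.
    return (mu_synthetic - mu_control) / sigma_control
```

A good explanation yields a clearly positive score; a score near zero suggests the neuron does not respond to what the text claims it detects.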
Stars
19
Forks
2
Language
Jupyter Notebook
License
—
Last pushed
Jan 28, 2026
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/lkopf/cosy"
Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000/day.
Higher-rated alternatives
obss/sahi
Framework agnostic sliced/tiled inference + interactive ui + error analysis plots
tensorflow/tcav
Code for the TCAV ML interpretability project
MAIF/shapash
🔅 Shapash: User-friendly Explainability and Interpretability to Develop Reliable and Transparent...
TeamHG-Memex/eli5
A library for debugging/inspecting machine learning classifiers and explaining their predictions
csinva/imodels
Interpretable ML package 🔍 for concise, transparent, and accurate predictive modeling...