lkopf/cosy

[NeurIPS 2024] CoSy is an automatic evaluation framework for textual explanations of neurons.

Quality score: 32 / 100 (Emerging)

CoSy helps AI researchers and practitioners quantitatively assess the quality of textual explanations generated for individual neurons in deep neural networks. Given existing text descriptions of neuron functions and a control dataset, it generates synthetic data points from each description and measures how strongly the neuron responds to them relative to the control data. The output is a score for each explanation's quality, allowing users to compare different explanation methods.
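The evaluation idea above can be pictured as a separability test: if an explanation truly describes a neuron, the neuron should activate more strongly on data generated from that explanation than on control data. Below is a hedged, minimal sketch of that idea in plain Python; the function name `auc_score` and the Gaussian toy activations are illustrative assumptions, not the repository's actual API.

```python
import random

def auc_score(concept_acts, control_acts):
    """Rank-based AUC: probability that a random activation on
    explanation-derived (concept) data exceeds a random activation
    on control data. 1.0 = perfectly separable, 0.5 = chance."""
    wins = sum(c > k for c in concept_acts for k in control_acts)
    ties = sum(c == k for c in concept_acts for k in control_acts)
    return (wins + 0.5 * ties) / (len(concept_acts) * len(control_acts))

# Toy activations (assumed Gaussians, purely for illustration):
rng = random.Random(0)
concept_acts = [rng.gauss(2.0, 1.0) for _ in range(200)]  # neuron on synthetic concept data
control_acts = [rng.gauss(0.0, 1.0) for _ in range(500)]  # neuron on control dataset

score = auc_score(concept_acts, control_acts)
print(round(score, 3))  # closer to 1.0 = explanation matches the neuron better
```

A high score indicates the textual explanation captures what the neuron detects; comparing scores across explanation methods is the intended use.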

Use this if you are developing or using AI explanation methods and need a standardized, architecture-agnostic way to evaluate how well text descriptions truly represent what a neuron is detecting.

Not ideal if you are looking for methods to *generate* neuron explanations, or if you need to explain an entire model's decision process rather than individual neuron functions.

Explainable AI · Deep Learning · Evaluation · Neural Network Interpretation · AI Model Debugging · Computer Vision · Research
No License · No Package · No Dependents
Maintenance 10 / 25
Adoption 6 / 25
Maturity 8 / 25
Community 8 / 25


Stars: 19
Forks: 2
Language: Jupyter Notebook
License: none
Last pushed: Jan 28, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/lkopf/cosy"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
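For programmatic access from Python, the endpoint shown above can be called with the standard library. This is a hedged sketch: the URL pattern comes from the curl example, but the response fields are an assumption, since the API schema is not documented here.

```python
import json
import urllib.request

# URL pattern taken from the curl example above; only the owner/repo
# segments vary per repository.
API = "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/{owner}/{repo}"

def quality_url(owner: str, repo: str) -> str:
    """Build the per-repository quality endpoint URL."""
    return API.format(owner=owner, repo=repo)

def parse_quality(payload: str) -> dict:
    """Decode a JSON response body into a dict (field names are assumed)."""
    return json.loads(payload)

url = quality_url("lkopf", "cosy")
# To actually fetch (requires network access; no key needed up to 100 req/day):
# with urllib.request.urlopen(url) as resp:
#     data = parse_quality(resp.read().decode())
```

The fetch itself is left commented out so the sketch runs offline; uncomment it to hit the live endpoint.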