fursovia/tcav_nlp

"Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV)" paper implementation

Overall score: 26 / 100 (Experimental)

This project helps data scientists and researchers understand why their natural language processing (NLP) models make certain predictions. You provide text data and a trained text classification model, along with specific words or phrases (concepts) you want to investigate. The tool then reveals how strongly these concepts influence the model's decisions for different categories, giving you interpretable insights beyond just overall accuracy.

No commits in the last 6 months.

Use this if you need to explain the reasoning behind your NLP model's classifications, particularly how specific real-world ideas or entities (like 'democracy' or 'Russia') impact its predictions.

Not ideal if you are looking for simple feature importance scores, or if you don't have a trained NLP text classification model and concept examples ready.
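For a sense of what the tool computes: TCAV trains a linear probe to separate a concept's activations from random activations, takes the probe's normal vector as the concept activation vector (CAV), and reports the fraction of a class's examples whose logit gradient points along that CAV. The sketch below illustrates the technique with numpy and scikit-learn; it is not this repo's code, and the helper names (compute_cav, tcav_score) plus the synthetic activation and gradient arrays are hypothetical stand-ins for values you would extract from your own model.

import numpy as np
from sklearn.linear_model import LogisticRegression

def compute_cav(concept_acts, random_acts):
    # Linear probe: concept examples vs. random counterexamples.
    # The CAV is the unit-normalized normal of the decision boundary.
    X = np.vstack([concept_acts, random_acts])
    y = np.array([1] * len(concept_acts) + [0] * len(random_acts))
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    cav = clf.coef_[0]
    return cav / np.linalg.norm(cav)

def tcav_score(class_grads, cav):
    # Fraction of examples whose class logit increases along the CAV.
    return float(np.mean(class_grads @ cav > 0))

# Synthetic stand-ins for layer activations and d(logit)/d(activation):
rng = np.random.default_rng(0)
concept_acts = rng.normal(0.5, 1.0, size=(50, 128))   # e.g. sentences mentioning 'Russia'
random_acts  = rng.normal(0.0, 1.0, size=(50, 128))   # random counterexample sentences
class_grads  = rng.normal(0.2, 1.0, size=(200, 128))  # gradients for one target class

cav = compute_cav(concept_acts, random_acts)
print(f"TCAV score: {tcav_score(class_grads, cav):.2f}")

A score well above 0.5 suggests the concept pushes the model toward that class; the paper additionally repeats the test against multiple random concept sets to check statistical significance.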

NLP-interpretability model-explainability text-classification AI-auditing research-analysis
No License · Stale (6m) · No Package · No Dependents
Maintenance: 0 / 25
Adoption: 4 / 25
Maturity: 8 / 25
Community: 14 / 25


Stars: 8
Forks: 3
Language: Jupyter Notebook
License: None
Last pushed: Mar 22, 2019
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/nlp/fursovia/tcav_nlp"

Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000 requests/day.
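If you would rather fetch this from Python than shell out to curl, a minimal sketch using the requests library is below; the shape of the response body is an assumption (the endpoint presumably returns JSON), so the example simply prints whatever comes back.

import requests

# Same endpoint as the curl command above.
url = "https://pt-edge.onrender.com/api/v1/quality/nlp/fursovia/tcav_nlp"
resp = requests.get(url, timeout=10)
resp.raise_for_status()
print(resp.json())  # assumes a JSON body; fall back to resp.text otherwise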