Crisp-Unimib/ContrXT
a tool for comparing the predictions of any text classifier
This tool helps data scientists, product managers, and analysts understand how their text classification models change over time or differ from one another. You feed it the training data and predictions from two text classifiers; it returns visual indicators and natural-language explanations detailing the differences in their classification behavior. It is aimed at anyone who needs to explain why a text classification model makes certain decisions or how its logic has shifted.
No commits in the last 6 months. Available on PyPI.
Use this if you need to compare two versions of a text classifier, or two distinct classifiers, and want a plain-language explanation of *why* they classify text differently.
Not ideal if you are looking for a tool to improve the accuracy or performance of your text classification models directly, as it focuses on explaining behavior changes rather than model optimization.
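To make the workflow concrete, here is a minimal sketch of training two classifiers on the same corpus and handing their predictions to ContrXT. The scikit-learn setup is standard; the ContrXT import, constructor arguments, and the run_trace/run_explain calls are assumptions sketched from the project's README, so check the current README for the exact signature before relying on them.

    from sklearn.datasets import fetch_20newsgroups
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.naive_bayes import MultinomialNB

    # Two different classifiers trained on the same corpus stand in for the
    # "time 1" and "time 2" models whose behavior we want to contrast.
    data = fetch_20newsgroups(subset="train", categories=["sci.med", "sci.space"])
    X_text, y = data.data, data.target

    vec = TfidfVectorizer(max_features=5000)
    X = vec.fit_transform(X_text)

    pred_t1 = LogisticRegression(max_iter=1000).fit(X, y).predict(X)
    pred_t2 = MultinomialNB().fit(X, y).predict(X)

    # Assumed ContrXT API (pip install contrxt): it takes each corpus with the
    # corresponding predicted labels, fits surrogate decision trees, and diffs
    # the resulting rule sets into contrastive explanations.
    from contrxt.contrxt import ContrXT

    exp = ContrXT(X_text, pred_t1, X_text, pred_t2)  # assumed positional API
    exp.run_trace()    # fit surrogate trees for both models
    exp.run_explain()  # generate the natural-language diff of their behavior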
Stars: 27
Forks: 2
Language: Python
License: GPL-3.0
Last pushed: Jul 30, 2022
Commits (30d): 0
Dependencies: 8
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/Crisp-Unimib/ContrXT"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
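The same endpoint can also be called from a script. A minimal Python sketch using requests, assuming the response is JSON carrying the stats listed above (the exact schema is not documented here):

    import requests

    # Public quality endpoint from above; no key needed up to 100 requests/day.
    URL = "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/Crisp-Unimib/ContrXT"

    resp = requests.get(URL, timeout=10)
    resp.raise_for_status()
    print(resp.json())  # assumed: JSON body with the stats listed above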
Higher-rated alternatives
obss/sahi
Framework agnostic sliced/tiled inference + interactive ui + error analysis plots
tensorflow/tcav
Code for the TCAV ML interpretability project
MAIF/shapash
🔅 Shapash: User-friendly Explainability and Interpretability to Develop Reliable and Transparent...
TeamHG-Memex/eli5
A library for debugging/inspecting machine learning classifiers and explaining their predictions
csinva/imodels
Interpretable ML package 🔍 for concise, transparent, and accurate predictive modeling...