mohsenfayyaz/DecompX
DecompX: Explaining Transformers Decisions by Propagating Token Decomposition [ACL 2023]
This tool helps AI practitioners understand why their Transformer-based language models make specific decisions. It takes a trained Transformer model and an input text, then shows how each word or token contributes to the model's final prediction. Data scientists, machine learning engineers, and AI researchers can use this to debug models or gain insights into their behavior.
No commits in the last 6 months.
Use this if you need to precisely understand the individual word contributions to a Transformer model's output, especially for debugging or explaining model predictions in natural language processing tasks.
Not ideal if you need a high-level, generalized explanation of model behavior rather than granular, token-level insights, or if your models are not Transformer-based.
Stars: 19
Forks: 2
Language: Jupyter Notebook
License: MIT
Category:
Last pushed: Jul 03, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/nlp/mohsenfayyaz/DecompX"
Open to everyone: 100 requests/day with no key required; a free key raises the limit to 1,000/day.
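The curl command above can also be issued from Python. This is a minimal sketch using only the standard library; the helper names and the assumption that the endpoint returns JSON are mine, not from the API's documentation.

```python
# Minimal sketch of calling the quality API shown above.
# Assumptions (not confirmed by the listing): the response body is JSON,
# and the URL path is always /api/v1/quality/<category>/<owner>/<repo>.
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(category: str, repo: str) -> str:
    """Build the API URL for a repo, e.g. category='nlp', repo='mohsenfayyaz/DecompX'."""
    return f"{BASE}/{category}/{repo}"


def fetch_quality(category: str, repo: str) -> dict:
    """Fetch a repo's quality data (free tier: 100 requests/day, no key needed)."""
    with urllib.request.urlopen(quality_url(category, repo)) as resp:
        return json.load(resp)


print(quality_url("nlp", "mohsenfayyaz/DecompX"))
```

`fetch_quality` performs the actual network request; `quality_url` is split out so the URL construction can be checked without hitting the rate limit.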
Higher-rated alternatives
rmovva/HypotheSAEs
HypotheSAEs: hypothesizing interpretable relationships in text datasets using sparse...
interpretml/interpret-text
A library that incorporates state-of-the-art explainers for text-based machine learning models...
fdalvi/NeuroX
A Python library that encapsulates various methods for neuron interpretation and analysis in...
jalammar/ecco
Explain, analyze, and visualize NLP language models. Ecco creates interactive visualizations...
alexdyysp/ESIM-pytorch
China Collegiate Computing Contest: Big Data Challenge