mohsenfayyaz/DecompX

DecompX: Explaining Transformers Decisions by Propagating Token Decomposition [ACL 2023]

Score: 32 / 100 (Emerging)

This tool helps AI practitioners understand why their Transformer-based language models make specific decisions. It takes a trained Transformer model and an input text, then shows how each word or token contributes to the model's final prediction. Data scientists, machine learning engineers, and AI researchers can use this to debug models or gain insights into their behavior.

No commits in the last 6 months.

Use this if you need to precisely understand the individual word contributions to a Transformer model's output, especially for debugging or explaining model predictions in natural language processing tasks.

Not ideal if you need a high-level, generalized explanation of model behavior rather than granular, token-level insights, or if your models are not Transformer-based.

Tags: AI Explainability, Natural Language Processing, Machine Learning, Debugging, Model Interpretation, Transformer Models
Flags: Stale (6m), No Package, No Dependents
Maintenance 2 / 25
Adoption 6 / 25
Maturity 16 / 25
Community 8 / 25


Stars: 19
Forks: 2
Language: Jupyter Notebook
License: MIT
Last pushed: Jul 03, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/nlp/mohsenfayyaz/DecompX"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
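For scripted access, the same endpoint can be queried from Python. A minimal sketch using only the standard library; the `Authorization: Bearer` header for the optional API key and the shape of the JSON response are assumptions, not documented here:

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(category, repo):
    """Build the quality-endpoint URL for a category/repo pair."""
    return f"{BASE}/{category}/{repo}"


def fetch_quality(category, repo, api_key=None):
    """Fetch the quality report as a dict.

    The key-passing scheme (Bearer header) is an assumption; check the
    API docs for the actual mechanism if you use a free key.
    """
    req = urllib.request.Request(quality_url(category, repo))
    if api_key:
        req.add_header("Authorization", f"Bearer {api_key}")
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


if __name__ == "__main__":
    # No key needed for up to 100 requests/day
    report = fetch_quality("nlp", "mohsenfayyaz/DecompX")
    print(report)
```

Swap in any other `category/owner/repo` path from the site to look up a different project.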