cdpierse/transformers-interpret

Model explainability that works seamlessly with 🤗 transformers. Explain your transformers model in just 2 lines of code.

Score: 44 / 100 (Emerging)

This tool helps data scientists, machine learning engineers, and researchers understand why their AI models make specific predictions on text or image data. It takes an existing "transformer" model and a piece of data, then shows which parts of the input most strongly influenced the model's decision, making complex AI behavior more transparent. This is crucial for debugging models, building trust, and ensuring fairness.

1,413 stars. No commits in the last 6 months.

Use this if you need to explain the reasoning behind predictions made by a transformer-based AI model, whether for a single piece of text, a pair of texts, or an image.

Not ideal if you are working with non-transformer models, traditional machine learning algorithms, or tabular data, as it's specifically built for the Hugging Face Transformers ecosystem.

AI-explainability NLP-model-auditing computer-vision-debugging model-transparency responsible-AI
Flags: Stale (6 months) · No Package · No Dependents
Maintenance 0 / 25
Adoption 10 / 25
Maturity 16 / 25
Community 18 / 25


Stars: 1,413
Forks: 100
Language: Jupyter Notebook
License: Apache-2.0
Last pushed: Aug 30, 2023
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/cdpierse/transformers-interpret"

Open to everyone: 100 requests/day with no key required. A free API key raises the limit to 1,000 requests/day.
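The same data can be fetched from Python with only the standard library. This is a sketch assuming the endpoint above returns JSON; the helper names (`quality_url`, `fetch_quality`) are hypothetical, not part of the API.

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(ecosystem: str, repo: str) -> str:
    # Build the endpoint URL for a given ecosystem and owner/repo slug.
    return f"{BASE}/{ecosystem}/{repo}"

def fetch_quality(ecosystem: str, repo: str) -> dict:
    # Fetch and decode the quality report (assumes a JSON response body).
    with urllib.request.urlopen(quality_url(ecosystem, repo)) as resp:
        return json.load(resp)

if __name__ == "__main__":
    report = fetch_quality("transformers", "cdpierse/transformers-interpret")
    print(json.dumps(report, indent=2))
```

The unauthenticated 100 requests/day limit applies here as well, so cache responses if you poll many repositories.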