g8a9/ferret

A Python package for benchmarking interpretability techniques on Transformers.

Quality score: 39 / 100 (Emerging)

This project helps machine learning engineers and researchers understand why their Transformer-based text models make specific decisions. You input your text model and some example text, and it outputs explanations showing which words were most important, along with benchmark scores evaluating how trustworthy those explanations are. This allows you to compare different explanation methods and choose the most reliable one for your application.
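The workflow above can be sketched with ferret's Benchmark interface. This is a minimal sketch following the pattern in the project's README; the sentiment model name is an illustrative choice, and the explain/evaluate calls assume ferret's documented API rather than reflecting anything verified here:

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from ferret import Benchmark

# Any Transformer text classifier works; this checkpoint is just an example.
name = "cardiffnlp/twitter-xlm-roberta-base-sentiment"
model = AutoModelForSequenceClassification.from_pretrained(name)
tokenizer = AutoTokenizer.from_pretrained(name)

# Wrap model + tokenizer; the Benchmark bundles several explainers
# (e.g. gradient- and perturbation-based) and faithfulness metrics.
bench = Benchmark(model, tokenizer)

# Explain a single input for a target class, then score the explanations.
explanations = bench.explain("You look stunning!", target=1)
evaluations = bench.evaluate_explanations(explanations, target=1)
bench.show_evaluation_table(evaluations)
```

Running this downloads a pretrained checkpoint on first use, and the evaluation table is what lets you compare explainers side by side before picking one.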

215 stars. No commits in the last 6 months.

Use this if you need to evaluate and compare different interpretability techniques to confidently explain the predictions of your Transformer text models.

Not ideal if you are working with non-textual data or require explanations for model architectures other than Transformers.

natural-language-processing model-interpretability machine-learning-auditing text-classification responsible-AI
Flags: Stale (6 months) · No Package · No Dependents
Maintenance 0 / 25
Adoption 10 / 25
Maturity 16 / 25
Community 13 / 25

Stars: 215
Forks: 17
Language: Python
License: MIT
Last pushed: Sep 29, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/g8a9/ferret"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
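The same endpoint can be called from Python. This is a hedged sketch: only the URL shape comes from the curl command above; the helper names and the choice of urllib are illustrative, and the response is assumed to be JSON:

```python
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_api_url(ecosystem: str, owner: str, repo: str) -> str:
    """Build the quality endpoint URL for a repository (mirrors the curl example)."""
    return f"{API_BASE}/{ecosystem}/{owner}/{repo}"


def fetch_quality(ecosystem: str, owner: str, repo: str) -> dict:
    """Fetch the quality report and parse it as JSON (keyless, rate-limited)."""
    with urllib.request.urlopen(quality_api_url(ecosystem, owner, repo)) as resp:
        return json.load(resp)


# Example (performs a network request):
# report = fetch_quality("transformers", "g8a9", "ferret")
```

Keeping the URL builder separate from the fetch makes the request target easy to check without hitting the rate-limited endpoint.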