EdGENetworks/anuvada

Interpretable Models for NLP using PyTorch

Score: 31 / 100 (Emerging)

When you have a text classification model built with PyTorch and need to understand *why* it made a particular decision, this helps you peek inside. It takes a trained model and text inputs, then highlights the words or phrases most influential in the model's classification. This is ideal for data scientists, machine learning engineers, and researchers working with NLP models who need to ensure fairness, explainability, or diagnose unexpected behavior.

105 stars. No commits in the last 6 months.

Use this if you need to interpret the decision-making process of your deep learning natural language processing models.

Not ideal if your models are not built using PyTorch or if you are not working with text classification tasks.
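The description above refers to attention-style word importance. anuvada's own API is not documented on this page, so the following is a minimal, library-free sketch of the underlying idea only: normalize per-token scores with a softmax and rank the tokens by weight. The tokens and raw scores are made up for illustration.

```python
import math

def attention_weights(scores):
    # Softmax over raw attention scores: turns arbitrary reals
    # into a probability distribution over tokens.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def top_influential(tokens, scores, k=3):
    # Pair each token with its normalized weight, rank descending.
    weights = attention_weights(scores)
    ranked = sorted(zip(tokens, weights), key=lambda tw: tw[1], reverse=True)
    return ranked[:k]

# Hypothetical sentence and raw attention scores for illustration.
tokens = ["the", "service", "was", "terrible", "and", "slow"]
scores = [0.1, 1.2, 0.1, 3.5, 0.2, 2.1]
print(top_influential(tokens, scores))
```

In a real model the raw scores would come from a trained attention layer rather than being hand-written, but the normalization-and-ranking step is the same.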

Tags: text-classification, model-interpretability, NLP-auditing, bias-detection, machine-learning-explainability
Badges: No License · Stale (6 months) · No Package · No Dependents
Maintenance 0 / 25
Adoption 9 / 25
Maturity 8 / 25
Community 14 / 25
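The four subscores above add up to the overall score, which suggests a simple additive rubric. That is an inference from the numbers shown, not a documented formula:

```python
# Assumption: overall score = sum of the four 0-25 subscores.
subscores = {"Maintenance": 0, "Adoption": 9, "Maturity": 8, "Community": 14}
overall = sum(subscores.values())
print(f"{overall} / 100")  # 31 / 100, matching the score shown above
```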

Stars: 105
Forks: 12
Language: Python
License: none
Last pushed: Jan 08, 2018
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/nlp/EdGENetworks/anuvada"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
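For scripted access, the curl URL above decomposes into ecosystem, owner, and repo path segments. A small sketch of assembling it in Python; the `quality_url` helper is hypothetical, and the response schema is not documented here, so only the URL is built:

```python
from urllib.parse import quote

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(ecosystem: str, owner: str, repo: str) -> str:
    # Hypothetical helper: rebuild the endpoint URL from the curl
    # example, escaping each path segment.
    return f"{API_BASE}/{quote(ecosystem)}/{quote(owner)}/{quote(repo)}"

print(quality_url("nlp", "EdGENetworks", "anuvada"))
```

Pass the result to any HTTP client; the unauthenticated limit is 100 requests per day, as noted above.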