alstonlo/torch-influence
A simple PyTorch implementation of influence functions.
This tool helps machine learning engineers and researchers understand why their models make specific predictions. By analyzing a trained model and its training data, it tells you which individual training examples most strongly influenced the model's behavior on a given test example. This allows you to identify helpful or harmful data points affecting your model's outcomes.
No commits in the last 6 months.
Use this if you need to debug black-box machine learning models, improve fairness, identify data quality issues like mislabeled data, or understand specific model predictions by tracing them back to their training data origins.
Not ideal if you are looking for a general-purpose model interpretability tool that provides global feature importances or rule-based explanations, as this focuses on individual data point influence.
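The idea behind influence functions can be illustrated without the library itself. The classic formula scores each training point z_i against a test point z_test as I(i) = -∇L(z_test)ᵀ H⁻¹ ∇L(z_i), where H is the Hessian of the training loss. Below is a minimal NumPy sketch for ridge regression, where the Hessian and per-example gradients have closed forms; all variable names and the synthetic data are illustrative, and torch-influence's actual API is not shown here.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 50, 3
X = rng.normal(size=(n, d))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.1 * rng.normal(size=n)

# Ridge regression: damping keeps the Hessian invertible,
# mirroring the damping term commonly used with influence functions.
lam = 1e-3
H = X.T @ X / n + lam * np.eye(d)      # Hessian of the regularized loss
w = np.linalg.solve(H, X.T @ y / n)    # closed-form minimizer

# Gradient of the test loss 0.5 * (w^T x - y)^2 at the fitted weights.
x_test, y_test = rng.normal(size=d), 0.0
g_test = (w @ x_test - y_test) * x_test

# Influence of each training point on the test loss:
#   I(i) = -g_test^T H^{-1} g_i
residuals = X @ w - y
grads = residuals[:, None] * X                     # per-example gradients
influences = -grads @ np.linalg.solve(H, g_test)   # shape (n,)

# The most positive entries are the training points that most
# increased the test loss (harmful); the most negative, helpful ones.
print(influences.shape)
```

In a deep-learning setting the Hessian cannot be formed or inverted explicitly, which is why implementations like this one approximate the H⁻¹∇L product with iterative solvers (e.g. conjugate gradient or LiSSA-style stochastic estimation) instead.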
Stars: 92
Forks: 12
Language: Python
License: Apache-2.0
Last pushed: Jun 17, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/alstonlo/torch-influence"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
obss/sahi
Framework agnostic sliced/tiled inference + interactive ui + error analysis plots
tensorflow/tcav
Code for the TCAV ML interpretability project
MAIF/shapash
🔅 Shapash: User-friendly Explainability and Interpretability to Develop Reliable and Transparent...
TeamHG-Memex/eli5
A library for debugging/inspecting machine learning classifiers and explaining their predictions
csinva/imodels
Interpretable ML package 🔍 for concise, transparent, and accurate predictive modeling...