alstonlo/torch-influence

A simple PyTorch implementation of influence functions.

Score: 40 / 100 (Emerging)

This tool helps machine learning engineers and researchers understand why their models make specific predictions. By analyzing a trained model and its training data, it tells you which individual training examples most strongly influenced the model's behavior on a given test example. This allows you to identify helpful or harmful data points affecting your model's outcomes.
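The underlying math can be sketched without the library: the influence of a training point z on a test point's loss is approximately -g_test^T H^{-1} g_z, where H is the Hessian of the training loss and g are loss gradients at the trained parameters. Below is a minimal, self-contained NumPy illustration of that formula on a toy logistic regression; it is a generic sketch of the influence-function computation, not the torch-influence API, and all names (`influences`, `g_train`, the damping constant) are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 20 training points, 2 features, binary labels
X = rng.normal(size=(20, 2))
y = (X[:, 0] > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Fit logistic regression with plain gradient descent
w = np.zeros(2)
for _ in range(500):
    p = sigmoid(X @ w)
    w -= 0.5 * X.T @ (p - y) / len(X)

# Hessian of the mean training loss at w, plus a small damping
# term so the solve below is well conditioned
p = sigmoid(X @ w)
H = (X.T * (p * (1 - p))) @ X / len(X) + 0.01 * np.eye(2)

# Loss gradient at one test example
x_test, y_test = rng.normal(size=2), 1.0
g_test = (sigmoid(x_test @ w) - y_test) * x_test

# Influence of training point i on the test loss:
#   I(z_i) = -g_test^T H^{-1} g_i
# Positive values flag points whose upweighting raises the test loss.
s = np.linalg.solve(H, g_test)      # H^{-1} g_test, computed once
g_train = (p - y)[:, None] * X      # per-example training gradients
influences = -g_train @ s           # one score per training point
```

The H^{-1} g_test product is computed once and reused for every training point, which is what makes this ranking cheap; for large models, libraries like this one replace the explicit Hessian with iterative approximations (e.g. conjugate gradient or LiSSA).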

No commits in the last 6 months.

Use this if you need to debug black-box machine learning models, improve fairness, identify data quality issues like mislabeled data, or understand specific model predictions by tracing them back to their training data origins.

Not ideal if you are looking for a general-purpose model interpretability tool that provides global feature importances or rule-based explanations, as this focuses on individual data point influence.

Topics: Machine Learning Explainability, Model Debugging, Data Quality, ML Research, Data Analysis
Flags: Stale (6m), No Package, No Dependents
Maintenance 0 / 25
Adoption 9 / 25
Maturity 16 / 25
Community 15 / 25


Stars: 92
Forks: 12
Language: Python
License: Apache-2.0
Last pushed: Jun 17, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/alstonlo/torch-influence"

Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000/day.