josephenguehard/time_interpret

Unified Model Interpretability Library for Time Series

Score: 50 / 100 (Established)

This library helps data scientists and machine learning engineers understand why their time series models make certain predictions. Given time series data and a trained model, it produces saliency scores: explanations that highlight which parts of the historical input most influenced the model's forecast or decision. This is crucial for building trust in complex models and for debugging unexpected behavior.
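To make "saliency scores" concrete, here is a minimal, self-contained sketch of one simple attribution technique (occlusion): each time step is scored by how much the prediction changes when that step is replaced with a baseline value. This is illustrative only; the toy model and all names below are assumptions, not time_interpret's actual API, and the library's own attribution methods are more sophisticated.

```python
# Toy "model": a weighted sum over a 5-step window, with the heaviest
# weight on the most recent step (purely illustrative; any trained
# model with a scalar output would be scored the same way).
WEIGHTS = [0.05, 0.10, 0.15, 0.30, 0.40]

def model(series):
    return sum(w * x for w, x in zip(WEIGHTS, series))

def occlusion_saliency(series, baseline=0.0):
    """Score each time step by how much the prediction moves when
    that step is replaced with a baseline value."""
    base_pred = model(series)
    scores = []
    for t in range(len(series)):
        occluded = list(series)
        occluded[t] = baseline  # "remove" one time step
        scores.append(abs(base_pred - model(occluded)))
    return scores

series = [1.0, 1.0, 1.0, 1.0, 1.0]
print(occlusion_saliency(series))  # most recent step scores highest
```

For this linear toy model the scores recover the weights exactly, so the most recent step correctly gets the largest attribution.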

No commits in the last 6 months. Available on PyPI.

Use this if you need to explain the decisions of a time series prediction model, such as why a particular stock price prediction was made or what factors led to a specific anomaly detection.

Not ideal if you are working with non-time series data or if you need to build the predictive model itself rather than interpret an existing one.

time-series-analysis machine-learning-explainability predictive-modeling model-auditing
Status: Stale (6 months)
Maintenance: 2 / 25
Adoption: 9 / 25
Maturity: 25 / 25
Community: 14 / 25

How are scores calculated?

Stars

72

Forks

10

Language

Python

License

MIT

Last pushed

Sep 25, 2025

Commits (30d)

0

Dependencies

7

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/josephenguehard/time_interpret"

Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000/day.
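The same endpoint can be called from Python. The sketch below only builds the request URL from the page's category and repo slug (the helper name is ours, and the live fetch is left commented out since the response schema is not documented here).

```python
import json
import urllib.request

# Public endpoint shown in the curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category, owner, repo):
    """Build the quality-data URL for a given category and repository."""
    return f"{BASE}/{category}/{owner}/{repo}"

url = quality_url("ml-frameworks", "josephenguehard", "time_interpret")
print(url)

# Uncomment to fetch live data (no key needed, up to 100 requests/day):
# with urllib.request.urlopen(url) as resp:
#     data = json.load(resp)
#     print(data)
```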