josephenguehard/time_interpret
Unified Model Interpretability Library for Time Series
This library helps data scientists and machine learning engineers understand why their time series models make certain predictions. You provide your time series data and a trained model, and it produces saliency scores: explanations that highlight which parts of the historical data most influenced the model's forecast or decision. This is crucial for building trust in complex models and for debugging unexpected behavior.
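To make the idea of saliency scores concrete, here is a minimal sketch of occlusion-based attribution: score each time step by how much replacing it with a baseline changes the prediction. This is an illustration of the general technique, not time_interpret's own API (the library builds on PyTorch and Captum); the `predict` function and its weights are hypothetical stand-ins for a trained model.

```python
import numpy as np

# Hypothetical stand-in "model": a prediction function over a
# (time, features) window. It only uses the last 3 time steps,
# so a correct saliency method should highlight exactly those.
rng = np.random.default_rng(0)
weights = np.zeros((20, 3))
weights[-3:, :] = 1.0

def predict(window):
    return float((window * weights).sum())

def occlusion_saliency(predict_fn, window, baseline=0.0):
    """Score each time step by the absolute change in prediction
    when that step is replaced with a baseline value (occlusion)."""
    base_pred = predict_fn(window)
    scores = np.zeros(window.shape[0])
    for t in range(window.shape[0]):
        occluded = window.copy()
        occluded[t, :] = baseline
        scores[t] = abs(base_pred - predict_fn(occluded))
    return scores

window = rng.normal(size=(20, 3))
sal = occlusion_saliency(predict, window)
# Steps 0-16 get zero saliency; only the last 3 steps matter.
print(sal)
```

time_interpret ships more sophisticated, gradient-based variants of this idea (e.g. methods adapted from Captum) that account for temporal structure rather than perturbing one step at a time.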
No commits in the last 6 months. Available on PyPI.
Use this if you need to explain the decisions of a time series prediction model, such as why a particular stock price prediction was made or which factors triggered a specific anomaly detection.
Not ideal if you are working with non-time series data or if you need to build the predictive model itself rather than interpret an existing one.
Stars: 72
Forks: 10
Language: Python
License: MIT
Last pushed: Sep 25, 2025
Commits (30d): 0
Dependencies: 7
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/josephenguehard/time_interpret"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Related frameworks
obss/sahi
Framework agnostic sliced/tiled inference + interactive ui + error analysis plots
tensorflow/tcav
Code for the TCAV ML interpretability project
MAIF/shapash
🔅 Shapash: User-friendly Explainability and Interpretability to Develop Reliable and Transparent...
TeamHG-Memex/eli5
A library for debugging/inspecting machine learning classifiers and explaining their predictions
csinva/imodels
Interpretable ML package 🔍 for concise, transparent, and accurate predictive modeling...