khairulislam/Timeseries-Explained
Interpreting Deep Learning timeseries models using Local Interpretation methods
This tool helps data scientists and machine learning engineers understand why their deep learning time series forecasting models make certain predictions. Given a trained multi-horizon time series model and its data, it produces explanations (saliency scores) showing which past time steps and features most influenced each prediction. It is aimed at practitioners building or deploying complex time series models in fields like finance, healthcare, and operations.
No commits in the last 6 months.
Use this if you need to interpret how advanced deep learning models arrive at their forecasts on complex time series data.
Not ideal if you are working with simpler, more transparent forecasting models, or if you don't need detailed explanations for individual predictions.
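Local interpretation methods of this kind attribute a single forecast back to the input window. Below is a minimal sketch of that workflow using Captum's IntegratedGradients on a toy PyTorch forecaster; the model, shapes, and choice of attribution method are illustrative assumptions, not this repository's actual API.

import torch
import torch.nn as nn
from captum.attr import IntegratedGradients

class Forecaster(nn.Module):
    # Toy LSTM mapping a lookback window to a 3-step-ahead forecast.
    def __init__(self, n_features=4, horizon=3):
        super().__init__()
        self.lstm = nn.LSTM(n_features, 32, batch_first=True)
        self.head = nn.Linear(32, horizon)

    def forward(self, x):                  # x: (batch, time, features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])       # (batch, horizon)

model = Forecaster().eval()
window = torch.randn(1, 24, 4)             # 24 past steps, 4 features (made-up data)

ig = IntegratedGradients(model)
# Attribute the first forecast horizon (target=0); the saliency tensor
# has the input's shape: one score per (time step, feature) pair.
saliency = ig.attribute(window, target=0)
print(saliency.shape)                      # torch.Size([1, 24, 4])

Averaging such per-window scores over a dataset gives a global picture of which lags and features the model relies on.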
Stars: 12
Forks: 1
Language: Jupyter Notebook
License: MIT
Last pushed: Feb 24, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/khairulislam/Timeseries-Explained"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
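The same call from Python, as a minimal sketch: only the URL comes from the curl command above; the JSON response shape and the "X-API-Key" header name are assumptions.

import requests

url = ("https://pt-edge.onrender.com/api/v1/quality/"
       "transformers/khairulislam/Timeseries-Explained")
resp = requests.get(url, timeout=10)        # anonymous tier: 100 requests/day
# With a free key (1,000/day), a header such as this may be expected:
# resp = requests.get(url, headers={"X-API-Key": "YOUR_KEY"}, timeout=10)
resp.raise_for_status()
print(resp.json())                          # response assumed to be JSON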
Higher-rated alternatives
jessevig/bertviz
BertViz: Visualize Attention in Transformer Models
inseq-team/inseq
Interpretability for sequence generation models 🐛 🔍
EleutherAI/knowledge-neurons
A library for finding knowledge neurons in pretrained transformer models.
hila-chefer/Transformer-MM-Explainability
[ICCV 2021- Oral] Official PyTorch implementation for Generic Attention-model Explainability for...
cdpierse/transformers-interpret
Model explainability that works seamlessly with 🤗 transformers. Explain your transformers model...