mims-harvard/TimeX
Time series explainability via self-supervised model behavior consistency
TimeX helps researchers and analysts understand why their time-series classification models make specific predictions. It takes your raw time-series data and the output of your existing classification model, then generates "landmark explanations" that reveal broader predictive patterns. This is ideal for scientists, engineers, or financial analysts who need to interpret complex time-series model decisions.
No commits in the last 6 months.
Use this if you need to understand the underlying reasons behind your time-series classification model's predictions, especially when comparing multiple samples is important.
Not ideal if you are looking for simple, instance-specific explanations or if your data is not time-series based.
Stars: 54
Forks: 3
Language: Python
License: —
Last pushed: Oct 22, 2023
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/mims-harvard/TimeX"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
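For programmatic access, the endpoint above can also be called from Python. This is a minimal sketch using only the standard library; the response schema is not documented here, so the `fetch_quality` helper simply returns the parsed JSON as a dict, and the helper names are our own, not part of the API.

```python
import json
from urllib.request import urlopen

# Base path taken from the curl example above.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality/transformers"


def quality_url(owner: str, repo: str) -> str:
    """Build the per-repository quality endpoint URL."""
    return f"{API_BASE}/{owner}/{repo}"


def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch the quality record for a repo (no API key needed, 100 req/day)."""
    with urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)


# Example (performs a live network call):
# data = fetch_quality("mims-harvard", "TimeX")
```

Unauthenticated calls are capped at 100 requests/day, so cache results locally if you poll many repositories.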
Higher-rated alternatives
jessevig/bertviz
BertViz: Visualize Attention in Transformer Models
inseq-team/inseq
Interpretability for sequence generation models 🐛 🔍
EleutherAI/knowledge-neurons
A library for finding knowledge neurons in pretrained transformer models.
hila-chefer/Transformer-MM-Explainability
[ICCV 2021- Oral] Official PyTorch implementation for Generic Attention-model Explainability for...
cdpierse/transformers-interpret
Model explainability that works seamlessly with 🤗 transformers. Explain your transformers model...