mims-harvard/TimeX

Time series explainability via self-supervised model behavior consistency

Score: 23/100 (Experimental)

TimeX helps researchers and analysts understand why their time-series classification models make specific predictions. It takes your raw time-series data and the output of your existing classification model, then generates "landmark explanations" that reveal broader predictive patterns. This is ideal for scientists, engineers, or financial analysts who need to interpret complex time-series model decisions.

No commits in the last 6 months.

Use this if you need to understand the underlying reasons behind your time-series classification model's predictions, especially when comparing multiple samples is important.

Not ideal if you are looking for simple, instance-specific explanations or if your data is not time-series based.

Tags: time-series-analysis, predictive-modeling, model-interpretation, pattern-recognition, decision-support
Badges: No License · Stale (6m) · No Package · No Dependents
Score breakdown:
Maintenance: 0/25
Adoption: 8/25
Maturity: 8/25
Community: 7/25


Stars: 54
Forks: 3
Language: Python
License: None
Last pushed: Oct 22, 2023
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/mims-harvard/TimeX"

Open to everyone: 100 requests/day with no API key. A free key raises the limit to 1,000 requests/day.
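The same endpoint can be queried from Python instead of curl. A minimal sketch using only the standard library, assuming the URL pattern shown above (`/api/v1/quality/transformers/{owner}/{repo}`); the shape of the returned JSON is an assumption, so the example only decodes it without relying on specific field names:

```python
# Sketch: fetch the quality record for a repository from the pt-edge API.
# Only the URL pattern is taken from the listing above; everything else
# (function names, JSON handling) is illustrative.
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality/transformers"


def quality_url(owner: str, repo: str) -> str:
    """Build the per-repository quality endpoint URL."""
    return f"{BASE}/{owner}/{repo}"


def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch and decode the JSON quality record (100 requests/day without a key)."""
    with urllib.request.urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)


if __name__ == "__main__":
    print(quality_url("mims-harvard", "TimeX"))
```

Calling `fetch_quality("mims-harvard", "TimeX")` performs the same request as the curl command above and returns the decoded response.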