xianglinyang/TimeVis
Official source code for IJCAI 2022 Paper: Temporality Spatialization: A Scalable and Faithful Time-Travelling Visualization for Deep Classifier Training
This project helps machine learning researchers and practitioners understand how their deep learning models learn over time. It takes snapshots of a deep classifier model's state at different training epochs, along with the corresponding training and testing data. The output is a "time-travelling" visualization that reveals the evolution of the model's decision boundaries and data representations.
No commits in the last 6 months.
Use this if you need to visualize and analyze the training process of a deep classifier to understand why it makes certain predictions or how its internal representations change across epochs.
Not ideal if you are working with non-classification models, or if you need real-time monitoring of model training rather than a post-hoc analysis.
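Tools like this consume a series of per-epoch model snapshots plus the associated data. As a minimal sketch of producing that input, the following saves a copy of the model weights after each epoch; the names (`take_snapshot`, the JSON layout, the toy "training" step) are illustrative assumptions, not the TimeVis API.

```python
# Hypothetical sketch: capture one weight snapshot per training epoch,
# the kind of input a time-travelling visualizer consumes.
# All names here are illustrative, not taken from TimeVis.
import copy
import json
import os
import tempfile

def take_snapshot(epoch, weights, out_dir):
    """Persist a copy of the model weights for one training epoch."""
    path = os.path.join(out_dir, f"epoch_{epoch}.json")
    with open(path, "w") as f:
        json.dump({"epoch": epoch, "weights": weights}, f)
    return path

out_dir = tempfile.mkdtemp()
weights = {"w": [0.0, 0.0]}  # stand-in for real model parameters
paths = []
for epoch in range(1, 4):
    # stand-in for one real optimization step
    weights = {"w": [v + 0.1 for v in weights["w"]]}
    paths.append(take_snapshot(epoch, copy.deepcopy(weights), out_dir))

print(len(paths))
```

In practice the snapshots would be framework checkpoints (e.g. PyTorch `state_dict` files) saved on the same schedule, one per epoch, alongside the training and test sets.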
Stars: 9
Forks: 2
Language: Jupyter Notebook
License: —
Last pushed: May 06, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/xianglinyang/TimeVis"
Open to everyone: 100 requests/day with no key; a free key raises the limit to 1,000/day.
Higher-rated alternatives
obss/sahi
Framework agnostic sliced/tiled inference + interactive ui + error analysis plots
tensorflow/tcav
Code for the TCAV ML interpretability project
MAIF/shapash
🔅 Shapash: User-friendly Explainability and Interpretability to Develop Reliable and Transparent...
TeamHG-Memex/eli5
A library for debugging/inspecting machine learning classifiers and explaining their predictions
csinva/imodels
Interpretable ML package 🔍 for concise, transparent, and accurate predictive modeling...