Lexsi-Labs/DLBacktrace
DL Backtrace is a new explainability technique for deep learning models that works for any modality and model type.
This tool helps AI practitioners understand why their deep learning models make certain decisions. You provide a trained deep learning model and the data you want to explain, and it outputs relevance scores showing which parts of the input mattered most for the model's prediction. Anyone working with deep learning models on vision, text, or tabular data can use this to gain insight into model behavior.
Use this if you need to understand the internal workings and decision-making process of your deep learning models across various data types and architectures.
Not ideal if you are working with traditional machine learning models or primarily need to explain simple, non-deep learning algorithms.
Stars: 24
Forks: 5
Language: Python
License: —
Last pushed: Feb 16, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/Lexsi-Labs/DLBacktrace"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
obss/sahi
Framework agnostic sliced/tiled inference + interactive ui + error analysis plots
tensorflow/tcav
Code for the TCAV ML interpretability project
MAIF/shapash
🔅 Shapash: User-friendly Explainability and Interpretability to Develop Reliable and Transparent...
TeamHG-Memex/eli5
A library for debugging/inspecting machine learning classifiers and explaining their predictions
csinva/imodels
Interpretable ML package 🔍 for concise, transparent, and accurate predictive modeling...