Lexsi-Labs/DLBacktrace

DL Backtrace is a new explainability technique for deep learning models that works across modalities and model types.

Score: 47 / 100 (Emerging)

This tool helps AI practitioners understand why their deep learning models make certain decisions. You input your trained deep learning model and the data you want to explain, and it outputs relevance scores showing which parts of the input were most important for the model's prediction. Anyone working with deep learning models in areas like vision, text, or tabular data would use this to gain insight into model behavior.
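The core idea of attributing a prediction to parts of its input can be illustrated with a toy example. This is a generic input-contribution sketch for a linear model, not DLBacktrace's actual algorithm; all names here are illustrative:

```python
# Toy illustration of per-input relevance (NOT DLBacktrace's method):
# for a linear model y = w·x + b, each feature's contribution w_i * x_i
# shows how much it pushed the prediction up or down.
import numpy as np

w = np.array([2.0, -1.0, 0.5])   # pretend these are trained weights
b = 0.1
x = np.array([1.0, 3.0, 4.0])    # the input we want to explain

contributions = w * x            # per-feature relevance scores
prediction = contributions.sum() + b

print(prediction)                # 1.1
print(contributions)             # [ 2. -3.  2.]
```

Here the second feature drags the prediction down while the first and third push it up; techniques like DL Backtrace aim to produce analogous per-input scores for deep, nonlinear networks.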

Use this if you need to understand the internal workings and decision-making process of your deep learning models across various data types and architectures.

Not ideal if you are working with traditional machine learning models or primarily need to explain simple, non-deep learning algorithms.

Tags: AI-explainability, model-auditing, deep-learning-insights, AI-transparency, machine-learning-operations

No package published · No dependents

Maintenance: 10 / 25
Adoption: 6 / 25
Maturity: 16 / 25
Community: 15 / 25


Stars: 24
Forks: 5
Language: Python
License: not listed
Last pushed: Feb 16, 2026
Commits (30d): 0

Get this data via the API:

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/Lexsi-Labs/DLBacktrace"

Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000/day.
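A small Python sketch of calling the same endpoint from code. The response schema isn't documented here, so the sketch only builds the URL and shows how you would fetch and pretty-print whatever JSON comes back; the `quality_url` helper is my own convenience function, not part of the API:

```python
import json
from urllib.request import urlopen

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    # Build the report URL in the documented shape: /{category}/{owner}/{repo}
    return f"{BASE}/{category}/{owner}/{repo}"

url = quality_url("ml-frameworks", "Lexsi-Labs", "DLBacktrace")
print(url)

# Uncomment to actually fetch (anonymous access is limited to 100 requests/day):
# report = json.load(urlopen(url))
# print(json.dumps(report, indent=2))
```

Keeping the network call commented out makes the snippet safe to run offline; swap in your own owner/repo pair to query a different project.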