oneTaken/awesome_deep_learning_interpretability
Highly cited and top-conference papers from recent years on the interpretability of deep neural network models (with code where available)
This is a curated collection of highly-cited research papers focused on making deep learning models more understandable. It brings together academic publications, many with associated code, that explore how to interpret the decisions and internal workings of complex neural networks. It's designed for researchers and practitioners in machine learning who need to understand why their AI models make certain predictions, especially in fields where transparency is critical.
764 stars. No commits in the last 6 months.
Use this if you are a machine learning researcher or AI practitioner looking for a comprehensive list of influential papers and code related to deep learning interpretability.
Not ideal if you are a business user seeking an out-of-the-box software tool to interpret your existing models without diving into academic literature.
Stars: 764
Forks: 125
Language: —
License: MIT
Last pushed: Apr 08, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/oneTaken/awesome_deep_learning_interpretability"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
obss/sahi
Framework-agnostic sliced/tiled inference + interactive UI + error analysis plots
tensorflow/tcav
Code for the TCAV ML interpretability project
MAIF/shapash
🔅 Shapash: User-friendly Explainability and Interpretability to Develop Reliable and Transparent...
TeamHG-Memex/eli5
A library for debugging/inspecting machine learning classifiers and explaining their predictions
csinva/imodels
Interpretable ML package 🔍 for concise, transparent, and accurate predictive modeling...