oneTaken/awesome_deep_learning_interpretability

Highly cited and top-conference papers from recent years on the interpretability of deep neural network models (with code)

Quality score: 49 / 100 (Emerging)

This is a curated collection of highly-cited research papers focused on making deep learning models more understandable. It brings together academic publications, many with associated code, that explore how to interpret the decisions and internal workings of complex neural networks. It's designed for researchers and practitioners in machine learning who need to understand why their AI models make certain predictions, especially in fields where transparency is critical.

764 stars. No commits in the last 6 months.

Use this if you are a machine learning researcher or AI practitioner looking for a comprehensive list of influential papers and code related to deep learning interpretability.

Not ideal if you are a business user seeking an out-of-the-box software tool to interpret your existing models without diving into academic literature.

deep-learning-research explainable-ai model-transparency neural-network-analysis AI-ethics
Stale (6 months) · No Package · No Dependents
Maintenance 0 / 25
Adoption 10 / 25
Maturity 16 / 25
Community 23 / 25

How are scores calculated?

Stars: 764
Forks: 125
Language:
License: MIT
Last pushed: Apr 08, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/oneTaken/awesome_deep_learning_interpretability"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
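For scripted access, the same endpoint shown in the curl command can be called from Python. A minimal sketch using only the standard library; note that the response schema is not documented on this page, so the field names in the sample payload (`stars`, `forks`, `score`) are assumptions for illustration:

```python
import json
from urllib.request import urlopen

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, repo: str) -> str:
    """Build the quality-endpoint URL for a given category and repo slug."""
    return f"{BASE}/{category}/{repo}"

def parse_scores(payload: str) -> dict:
    """Keep only the top-level numeric fields from a JSON response body."""
    data = json.loads(payload)
    return {k: v for k, v in data.items() if isinstance(v, (int, float))}

url = quality_url(
    "ml-frameworks", "oneTaken/awesome_deep_learning_interpretability"
)

# Live fetch (requires network; uncomment to use):
# with urlopen(url) as resp:
#     scores = parse_scores(resp.read().decode("utf-8"))

# Offline demonstration with a mocked payload (field names are assumed):
sample = '{"stars": 764, "forks": 125, "score": 49}'
print(parse_scores(sample))
```

The free tier (100 requests/day, no key) should be enough for occasional lookups; batch users would want the keyed tier.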