yashkc2025/low_capacity_nn_behavior

Code for paper "Understanding Generalization, Robustness, and Interpretability in Low-Capacity Neural Networks"

Quality score: 33 / 100 (Emerging)

This project helps machine learning engineers and researchers understand how extremely small neural networks can still perform well on image classification. Using MNIST, it shows that even after drastically reducing the number of network weights, these tiny models classify digits accurately, and that larger, 'overparameterized' networks are more resilient to noisy data, not merely more accurate.
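To give a sense of scale, a "low-capacity" network in this setting can be a fully connected model with only a few thousand weights. The layer sizes below (784 → 16 → 10) are illustrative assumptions, not the architectures from the paper; the sketch just counts how few parameters such a model needs:

```python
def mlp_param_count(layer_sizes):
    """Total weights + biases for a fully connected network."""
    return sum(
        n_in * n_out + n_out  # weight matrix plus bias vector per layer
        for n_in, n_out in zip(layer_sizes, layer_sizes[1:])
    )

# A hypothetical tiny MLP for 28x28 MNIST digits: 784 inputs, 16 hidden units, 10 classes.
print(mlp_param_count([784, 16, 10]))  # 12730 parameters
```

For comparison, typical MNIST demo networks carry hundreds of thousands of parameters, which is the gap this line of research probes.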

No commits in the last 6 months.

Use this if you are exploring the fundamental behavior of neural networks, particularly their efficiency, interpretability, and robustness in resource-constrained environments.

Not ideal if you need a plug-and-play solution for building high-performance, complex AI systems, as this project focuses on foundational research insights rather than immediate application.

neural-network-efficiency model-interpretability model-robustness edge-ai deep-learning-research
Stale (6m) · No Package · No Dependents
Maintenance 2 / 25
Adoption 5 / 25
Maturity 15 / 25
Community 11 / 25


Stars: 13
Forks: 2
Language: Python
License: Apache-2.0
Last pushed: Jul 24, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/yashkc2025/low_capacity_nn_behavior"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
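The same endpoint can be queried from Python instead of curl. The URL layout below is inferred from the example above (`/quality/<category>/<owner>/<repo>`); the response schema is not documented here, so the sketch simply parses whatever JSON comes back:

```python
import json
from urllib.request import urlopen

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category, owner, repo):
    # Endpoint path layout inferred from the curl example above.
    return f"{BASE}/{category}/{owner}/{repo}"

url = quality_url("ml-frameworks", "yashkc2025", "low_capacity_nn_behavior")
# No key is needed for up to 100 requests/day; uncomment to fetch:
# data = json.load(urlopen(url))
```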