arnabsanyal/lnsdnn

https://arxiv.org/abs/1910.09876

Score: 34 / 100 (Emerging)

This project helps developers and researchers train deep neural networks more efficiently on devices with limited computational power. It takes standard neural network models and training datasets as input, and outputs trained models that perform similarly to traditional floating-point models but require significantly less processing power. This is ideal for those creating or deploying AI on edge devices.
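The efficiency gain in projects like this comes from the logarithmic number system (LNS): storing weights and activations as log-domain values turns costly multiplications into cheap additions. The repo's own API is not shown on this page, so the following is only a minimal sketch of the underlying idea; every name in it is illustrative, not taken from the lnsdnn codebase.

```python
import math

def to_lns(x):
    """Encode a nonzero float as (sign, log2 of magnitude) -- hypothetical helper."""
    return (1 if x >= 0 else -1, math.log2(abs(x)))

def from_lns(v):
    """Decode (sign, exponent) back to a float."""
    sign, e = v
    return sign * (2.0 ** e)

def lns_mul(a, b):
    """In LNS, multiplication is a single addition of exponents --
    this is where the reduced computational cost comes from."""
    return (a[0] * b[0], a[1] + b[1])

# A floating-point multiply (3.0 * -0.5) becomes one add in the log domain:
p = lns_mul(to_lns(3.0), to_lns(-0.5))
print(from_lns(p))  # -1.5
```

Note that LNS addition, unlike multiplication, needs a correction term and is the harder operation to implement; the sketch above covers only the multiply path.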

No commits in the last 6 months.

Use this if you need to train or deploy deep neural networks on resource-constrained hardware, such as embedded systems or IoT devices, and want to reduce computational complexity while maintaining high accuracy.

Not ideal if you are primarily working with high-performance computing environments where computational resources are abundant and maximizing training speed or precision is paramount over resource efficiency.

embedded-AI edge-computing neural-network-optimization low-power-AI deep-learning-hardware
Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 4 / 25
Maturity 16 / 25
Community 14 / 25


Stars: 7
Forks: 3
Language: Python
License: MIT
Last pushed: Jan 29, 2020
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/arnabsanyal/lnsdnn"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
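The API's response schema is not documented on this page. Assuming the endpoint returns the four subscores shown above as JSON, a client might total them as below; the field names (`repo`, `scores`, and the subscore keys) are guesses, not a documented contract.

```python
import json

# Hypothetical response body for the endpoint above -- the real schema
# may differ. The subscore values mirror the card: 0 + 4 + 16 + 14 = 34.
payload = json.loads("""
{
  "repo": "arnabsanyal/lnsdnn",
  "scores": {"maintenance": 0, "adoption": 4, "maturity": 16, "community": 14}
}
""")

# Total quality score out of 100:
total = sum(payload["scores"].values())
print(total)  # 34
```

In practice, replace the hard-coded `payload` with the body returned by the `curl` request shown above.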