rdspring1/LSH_DeepLearning

Scalable and Sustainable Deep Learning via Randomized Hashing

Score: 44/100 (Emerging)

This project helps deep learning engineers train and test neural networks more efficiently and sustainably. It applies randomized hashing to your existing deep learning model, significantly reducing the computational load on complex datasets by activating only the most important parts of the network. This means faster training times and lower energy consumption, making it well suited to developing or deploying large AI models.
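The core idea, selecting active neurons via locality-sensitive hashing, can be sketched with SimHash (random hyperplanes). This is a minimal illustration, not code from the repository; the class and method names (`SimHashSelector`, `signature`, `activeNeurons`) are made up for this example:

```java
import java.util.*;

// Illustrative sketch: hash neuron weight vectors into buckets with
// random hyperplanes, then compute only the neurons whose bucket
// matches the input's bucket. Names are hypothetical.
public class SimHashSelector {
    private final double[][] planes;                         // K random hyperplanes
    private final Map<Integer, List<Integer>> buckets = new HashMap<>();

    public SimHashSelector(int dim, int numPlanes, long seed) {
        Random rng = new Random(seed);
        planes = new double[numPlanes][dim];
        for (double[] p : planes)
            for (int i = 0; i < dim; i++) p[i] = rng.nextGaussian();
    }

    // K-bit signature: one bit per hyperplane, set by the sign of the dot product.
    public int signature(double[] v) {
        int sig = 0;
        for (int k = 0; k < planes.length; k++) {
            double dot = 0;
            for (int i = 0; i < v.length; i++) dot += planes[k][i] * v[i];
            if (dot >= 0) sig |= (1 << k);
        }
        return sig;
    }

    // Pre-hash each neuron's weight vector into a bucket (done once per epoch).
    public void index(double[][] weights) {
        for (int n = 0; n < weights.length; n++)
            buckets.computeIfAbsent(signature(weights[n]), s -> new ArrayList<>()).add(n);
    }

    // At training/inference time, only neurons colliding with the input fire.
    public List<Integer> activeNeurons(double[] input) {
        return buckets.getOrDefault(signature(input), Collections.emptyList());
    }
}
```

Because only the neurons whose signatures collide with the input are evaluated, the per-sample cost scales with the bucket size rather than the full layer width, which is the source of the compute and energy savings described above.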

No commits in the last 6 months.

Use this if you are a deep learning engineer struggling with the high computational cost and energy demands of training and testing large neural networks on complex datasets.

Not ideal if you are working with small models or datasets where computational efficiency is not a primary concern, or if you need to maintain 100% of the original model's accuracy without any approximation.

deep-learning-optimization sustainable-AI neural-network-training edge-AI-deployment computational-efficiency
Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 9 / 25
Maturity 16 / 25
Community 19 / 25

How are scores calculated?

Stars: 94
Forks: 22
Language: Java
License: Apache-2.0
Last pushed: May 16, 2022
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/rdspring1/LSH_DeepLearning"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.