letian-zhang/ANS

Autodidactic Neurosurgeon: Collaborative Deep Inference for Mobile Edge Intelligence via Online Learning

Score: 33 / 100 (Emerging)

ANS helps mobile and edge developers optimize how deep neural networks run across devices and servers. Given a DNN, it dynamically determines the best point to split the model's computation between a mobile device (such as an NVIDIA Jetson) and an edge server, learning that split point online. It is aimed at developers building and deploying AI applications on resource-constrained edge devices.
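The core idea above (picking where to cut the network) can be sketched as a latency comparison: for each candidate cut, the end-to-end cost is on-device compute up to the cut, plus shipping the intermediate activation, plus server-side compute for the rest. This is a hypothetical illustration of the partitioning objective, not ANS's actual API or its online-learning estimator:

```python
# Hypothetical sketch of DNN split-point selection (not the ANS codebase).
# device_ms[k]: time to run layers 0..k-1 on the device
# server_ms[k]: time to run layers k.. on the edge server
# transfer_ms[k]: time to ship the activation produced at cut k
def best_split(device_ms, server_ms, transfer_ms):
    costs = [d + t + s for d, t, s in zip(device_ms, transfer_ms, server_ms)]
    return min(range(len(costs)), key=costs.__getitem__)

# Example with made-up numbers: cut 2 wins because its activation is
# small (cheap to transmit) while later layers are heavy for the device.
print(best_split([0, 4, 9, 30], [20, 14, 6, 0], [0, 9, 1, 12]))  # → 2
```

ANS's contribution is that in practice these latencies are not known in advance, so it learns them online while serving inference requests; the sketch assumes they are given.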

No commits in the last 6 months.

Use this if you are a developer working with deep neural networks on edge devices and need to automatically optimize their performance by partitioning the model between the device and a server.

Not ideal if you are a data scientist or end-user who just wants to run a pre-trained model without optimizing its deployment architecture.

edge-computing mobile-AI deep-learning-deployment model-optimization distributed-inference
No License · Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 8 / 25
Maturity 8 / 25
Community 17 / 25


Stars: 42
Forks: 9
Language: Python
License: none
Last pushed: Aug 14, 2021
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/letian-zhang/ANS"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
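Judging from the curl example, the endpoint path follows a `{category}/{owner}/{repo}` scheme. A small helper can build the URL for any listed repo; the helper name and the generality of the scheme are assumptions on my part, only the ANS URL itself appears in the source:

```python
# Hypothetical URL builder based on the curl example's path layout.
BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the quality-API URL for one repository."""
    return f"{BASE}/{category}/{owner}/{repo}"

print(quality_url("ml-frameworks", "letian-zhang", "ANS"))
# → https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/letian-zhang/ANS
```

The resulting URL can then be fetched with any HTTP client (e.g. `urllib.request.urlopen`); the shape of the JSON response is not documented here, so inspect it before relying on specific fields.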