letian-zhang/ANS
Autodidactic Neurosurgeon: Collaborative Deep Inference for Mobile Edge Intelligence via Online Learning
ANS helps developers optimize how deep learning models run across devices and servers. Given a deep neural network, it uses online learning to dynamically determine the best point at which to split the model's computation between a mobile device (such as an Nvidia Jetson) and an edge server. It targets developers building and deploying AI applications on resource-constrained edge devices.
No commits in the last 6 months.
Use this if you are a developer working with deep neural networks on edge devices and need to automatically optimize their performance by partitioning the model between the device and a server.
Not ideal if you are a data scientist or end-user who just wants to run a pre-trained model without optimizing its deployment architecture.
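The split-point idea described above can be sketched roughly as follows. This is an illustrative sketch, not ANS's actual algorithm (which learns these costs online via a contextual bandit rather than assuming they are known); all layer timings, tensor sizes, and the bandwidth figure below are made up for the example.

```python
def best_split(device_ms, server_ms, out_mb, bandwidth_mbps):
    """Return (k, latency_ms): run layers [0, k) on-device, [k, n) on the server.

    out_mb[k] is the size (MB) of the tensor crossing the boundary before
    layer k; out_mb[0] is the model input, out_mb[n] the final output.
    """
    n = len(device_ms)
    best_k, best_t = 0, float("inf")
    for k in range(n + 1):
        # Upload time for the intermediate tensor at the candidate boundary.
        upload_ms = out_mb[k] * 8 / bandwidth_mbps * 1000
        t = sum(device_ms[:k]) + upload_ms + sum(server_ms[k:])
        if t < best_t:
            best_k, best_t = k, t
    return best_k, best_t


# Toy 3-layer model: slow device, fast server, one small intermediate tensor.
k, t = best_split(
    device_ms=[10, 20, 30],      # hypothetical per-layer device latency
    server_ms=[2, 4, 6],         # hypothetical per-layer server latency
    out_mb=[1.0, 0.1, 0.5, 0.01],
    bandwidth_mbps=100,
)
print(k, t)  # splitting after layer 0 wins: the 0.1 MB tensor is cheap to ship
```

In practice the device/server timings and the available bandwidth fluctuate, which is exactly why ANS estimates them with online learning instead of a one-shot profile.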
Stars: 42
Forks: 9
Language: Python
License: —
Category: ml-frameworks
Last pushed: Aug 14, 2021
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/letian-zhang/ANS"
Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000/day.
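The same request can be made from Python. A minimal sketch, assuming only the URL pattern shown in the curl command above; the response schema is not documented here, so the body is printed as raw JSON:

```python
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the per-repository quality endpoint URL used above."""
    return f"{API_BASE}/{category}/{owner}/{repo}"


def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """GET the endpoint and decode the JSON body (100 requests/day keyless)."""
    with urllib.request.urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)


if __name__ == "__main__":
    data = fetch_quality("ml-frameworks", "letian-zhang", "ANS")
    print(json.dumps(data, indent=2))
```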
Higher-rated alternatives
optuna/optuna
A hyperparameter optimization framework
keras-team/keras-tuner
A Hyperparameter Tuning Library for Keras
KernelTuner/kernel_tuner
Kernel Tuner
syne-tune/syne-tune
Large scale and asynchronous Hyperparameter and Architecture Optimization at your fingertips.
deephyper/deephyper
DeepHyper: A Python Package for Massively Parallel Hyperparameter Optimization in Machine Learning