OAID/AutoKernel

AutoKernel is an easy-to-use, low-barrier automatic operator (kernel) optimization tool that improves the deployment efficiency of deep learning algorithms.

Score: 45 / 100 (Emerging)

Deep learning models often run slowly when deployed on hardware they were not tuned for. This tool optimizes the fundamental operations (kernels) within neural networks, automatically generating high-performance, low-level code for various hardware targets such as CPUs, GPUs, and specialized accelerators. It's for embedded systems engineers and AI deployment specialists who need deep learning models to run as fast as possible on target devices.

743 stars. No commits in the last 6 months.

Use this if you need to significantly improve the execution speed of deep learning algorithms on diverse hardware platforms without manually writing complex, low-level optimization code.

Not ideal if you are a data scientist primarily focused on model training and experimentation, and are not concerned with model deployment performance on specific hardware.

deep-learning-deployment edge-ai embedded-systems hardware-acceleration model-optimization
Stale (6m) · No Package · No Dependents

Maintenance: 0 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 19 / 25


Stars: 743
Forks: 82
Language: C++
License: Apache-2.0
Last pushed: Sep 23, 2022
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/OAID/AutoKernel"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
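For scripted lookups, the endpoint URL can be assembled from its parts. This is a minimal sketch assuming the path follows the pattern `/api/v1/quality/{category}/{owner}/{repo}` seen in the single example above; the `quality_url` helper name and the pattern itself are assumptions, not documented behavior.

```python
# Hypothetical helper: builds the quality-API URL for a given repo.
# The /api/v1/quality/{category}/{owner}/{repo} pattern is inferred
# from the one example URL shown above — an assumption, not a spec.
def quality_url(category: str, owner: str, repo: str) -> str:
    base = "https://pt-edge.onrender.com/api/v1/quality"
    return f"{base}/{category}/{owner}/{repo}"

print(quality_url("ml-frameworks", "OAID", "AutoKernel"))
# → https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/OAID/AutoKernel
```

The built URL can then be passed to `curl` or any HTTP client, keeping within the daily request limits noted above.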