XiaoMi/mobile-ai-bench

Benchmarking Neural Network Inference on Mobile Devices

Score: 46 / 100 (Emerging)

This tool helps developers of mobile applications and IoT devices evaluate how well their deep learning models will run on different hardware and software configurations. Given a model, it reports inference speed and accuracy across mobile chips and inference frameworks, so developers can make informed decisions about which combination of hardware and software is most cost-effective for deploying their models on mobile or IoT devices.

386 stars. No commits in the last 6 months.

Use this if you are a developer deploying deep learning models to mobile or IoT devices and need to compare the performance and accuracy of different hardware (chips/boards) and software (inference frameworks) solutions.

Not ideal if you are looking for a general-purpose machine learning benchmarking tool for servers or cloud environments rather than mobile or IoT-specific deployments.

mobile-ai-deployment edge-ai deep-learning-inference iot-ai performance-optimization
Stale (6 months) · No package · No dependents
Maintenance 0 / 25
Adoption 10 / 25
Maturity 16 / 25
Community 20 / 25


Stars: 386
Forks: 58
Language: C++
License: Apache-2.0
Last pushed: Apr 10, 2023
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/XiaoMi/mobile-ai-bench"

Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000 requests/day.
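The endpoint above follows a collection/owner/repo path pattern. As a minimal sketch of scripting against it, the URL for a given repository can be assembled like this (the base URL and the `ml-frameworks` collection segment come from the curl example above; the `quality_url` helper name, and the existence of any other collections, are assumptions):

```python
# Sketch: build the quality-API URL shown above for any repository.
# The base URL and "ml-frameworks" collection come from the curl example;
# other collection names are not documented here and are an assumption.
BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(collection: str, owner: str, repo: str) -> str:
    """Return the API endpoint for one repository's quality data."""
    return f"{BASE}/{collection}/{owner}/{repo}"

print(quality_url("ml-frameworks", "XiaoMi", "mobile-ai-bench"))
# https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/XiaoMi/mobile-ai-bench
```

The printed URL can then be fetched with curl (as above) or any HTTP client, subject to the request limits noted.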