XiaoMi/mobile-ai-bench
Benchmarking Neural Network Inference on Mobile Devices
This tool helps mobile application or IoT device developers evaluate how well their deep learning models will run on various hardware and software configurations. It takes a deep learning model and outputs its speed and accuracy across different mobile chips and inference frameworks. Developers use this to make informed decisions about which combination of hardware and software is most cost-effective for deploying their models on mobile or IoT devices.
386 stars. No commits in the last 6 months.
Use this if you are a developer deploying deep learning models to mobile or IoT devices and need to compare the performance and accuracy of different hardware (chips/boards) and software (inference frameworks) solutions.
Not ideal if you are looking for a general-purpose machine learning benchmarking tool for servers or cloud environments rather than mobile or IoT-specific deployments.
Stars: 386
Forks: 58
Language: C++
License: Apache-2.0
Category:
Last pushed: Apr 10, 2023
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/XiaoMi/mobile-ai-bench"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000/day.
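For scripted access, the endpoint above can be wrapped in a few lines of Python. A minimal sketch follows; note that the response field names (`stars`, `forks`) are assumptions for illustration, since the page does not document the JSON schema:

```python
import json
from urllib.request import urlopen  # only needed for a live request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks"


def quality_url(owner: str, repo: str) -> str:
    """Build the API URL for a repository (endpoint shown on this page)."""
    return f"{API_BASE}/{owner}/{repo}"


def summarize(payload: dict) -> str:
    """Summarize a response; the 'stars'/'forks' keys are assumed, not documented."""
    return f"{payload.get('stars', '?')} stars, {payload.get('forks', '?')} forks"


# A live call would be: json.load(urlopen(quality_url("XiaoMi", "mobile-ai-bench")))
# Here we parse a hypothetical response body instead:
sample = json.loads('{"stars": 386, "forks": 58}')
print(quality_url("XiaoMi", "mobile-ai-bench"))
print(summarize(sample))
```

Keeping the URL builder separate from the parsing makes the parsing testable without hitting the rate-limited endpoint.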
Higher-rated alternatives
NVIDIA/TransformerEngine
A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit and 4-bit...
mlcommons/inference
Reference implementations of MLPerf® inference benchmarks
mlcommons/training
Reference implementations of MLPerf® training benchmarks
datamade/usaddress
:us: a python library for parsing unstructured United States address strings into address components
GRAAL-Research/deepparse
Deepparse is a state-of-the-art library for parsing multinational street addresses using deep learning