aallan/benchmarking-ml-on-the-edge
Benchmarking machine learning inferencing on embedded hardware.
This project measures how quickly machine learning models, such as object detectors, run on small, low-power devices. Given a model and a target embedded board, it reports how long a single prediction takes (its 'inference time'). Engineers and product managers deploying AI on edge devices can use it to choose the best hardware and framework combination.
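The core measurement is simple: time repeated forward passes and report summary statistics. A minimal sketch, framework-agnostic, where `infer` wraps any model's forward pass (the stand-in workload in the usage line is purely illustrative, not part of the project):

```python
import time
import statistics

def benchmark(infer, runs=50, warmup=5):
    """Time a single-inference callable; return (mean, stdev) in milliseconds."""
    for _ in range(warmup):
        infer()  # warm-up passes: lets caches, JIT, and CPU frequency settle
    times = []
    for _ in range(runs):
        start = time.perf_counter()
        infer()
        times.append((time.perf_counter() - start) * 1000.0)
    return statistics.mean(times), statistics.stdev(times)

# Usage: wrap whichever framework you are testing, e.g.
#   mean_ms, std_ms = benchmark(lambda: interpreter.invoke())  # TFLite example
mean_ms, std_ms = benchmark(lambda: sum(range(1000)))  # stand-in workload
```

Warm-up runs matter on embedded boards, where the first few inferences are often dominated by one-off setup costs rather than steady-state performance.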
No commits in the last 6 months.
Use this if you need to compare the performance of different embedded boards and machine learning frameworks for running AI models at the 'edge', away from powerful cloud servers.
Not ideal if you are looking for a complete, up-to-date, ready-to-run solution for all edge AI hardware, as some guides and scripts require updates to newer software libraries.
Stars: 26
Forks: 1
Language: Python
License: MIT
Category:
Last pushed: Jul 18, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/aallan/benchmarking-ml-on-the-edge"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
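The same endpoint can be called from Python. A minimal sketch that builds the request URL from its path components; the fetch itself is shown as a comment since it needs network access, and any JSON field names are assumptions, not a documented schema:

```python
from urllib.parse import quote

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the quality-endpoint URL for a given repository."""
    return f"{BASE}/{quote(category)}/{quote(owner)}/{quote(repo)}"

url = quality_url("ml-frameworks", "aallan", "benchmarking-ml-on-the-edge")
print(url)

# To fetch the data (requires network access):
#   import json, urllib.request
#   with urllib.request.urlopen(url) as resp:
#       data = json.load(resp)  # field names in the response are not documented here
```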
Higher-rated alternatives
NVIDIA/TransformerEngine
A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit and 4-bit...
mlcommons/inference
Reference implementations of MLPerf® inference benchmarks
mlcommons/training
Reference implementations of MLPerf® training benchmarks
datamade/usaddress
:us: a python library for parsing unstructured United States address strings into address components
GRAAL-Research/deepparse
Deepparse is a state-of-the-art library for parsing multinational street addresses using deep learning