aallan/benchmarking-ml-on-the-edge

Benchmarking machine learning inferencing on embedded hardware.

Score: 29 / 100 (Experimental)

This project measures how quickly machine learning models, such as object detectors, run on small, low-power devices. Given a specific model and the embedded hardware you're considering, it reports how long a single prediction takes (the model's 'inference time'). Engineers and product managers designing or deploying AI on edge devices can use this to pick the best hardware and software combination.
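The core measurement the project describes can be sketched as a simple latency benchmark: warm the model up, then time repeated inference calls and report summary statistics. This is a minimal illustration, not the repo's actual harness; `fake_infer` is a stand-in for a real call such as a TFLite interpreter's `invoke()`.

```python
import time
import statistics

def benchmark(infer, n_warmup=5, n_runs=50):
    """Time repeated calls to `infer` after a warm-up phase.

    Returns mean, standard deviation, and minimum latency in ms.
    """
    # Warm-up runs let caches, JITs, and delegates settle before timing.
    for _ in range(n_warmup):
        infer()
    times_ms = []
    for _ in range(n_runs):
        start = time.perf_counter()
        infer()
        times_ms.append((time.perf_counter() - start) * 1000.0)
    return {
        "mean_ms": statistics.mean(times_ms),
        "stdev_ms": statistics.stdev(times_ms),
        "min_ms": min(times_ms),
    }

# Stand-in workload; in a real benchmark this would be something like
# interpreter.invoke() on a model loaded onto the device under test.
def fake_infer():
    sum(i * i for i in range(10_000))

stats = benchmark(fake_infer)
print(f"mean {stats['mean_ms']:.2f} ms over 50 runs "
      f"(min {stats['min_ms']:.2f} ms)")
```

Reporting the minimum alongside the mean is common in edge benchmarking, since the minimum approximates the hardware's best case with OS scheduling noise excluded.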

No commits in the last 6 months.

Use this if you need to compare the performance of different embedded boards and machine learning frameworks for running AI models at the 'edge', away from powerful cloud servers.

Not ideal if you need a complete, up-to-date, ready-to-run solution for all edge AI hardware: some of the guides and scripts need updating for newer software libraries.

Tags: edge-ai, embedded-systems, machine-learning-deployment, hardware-evaluation, device-optimization
Flags: Stale (6m) · No Package · No Dependents
Maintenance 2 / 25
Adoption 7 / 25
Maturity 16 / 25
Community 4 / 25


Stars: 26
Forks: 1
Language: Python
License: MIT
Last pushed: Jul 18, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/aallan/benchmarking-ml-on-the-edge"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
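The same endpoint the curl command hits can be queried from Python. A minimal sketch, assuming the URL layout shown above (`/api/v1/quality/{category}/{owner}/{repo}`); the response schema is not documented here, so the live fetch is left commented out.

```python
import json
from urllib.request import urlopen

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the quality-score endpoint URL for a given repository."""
    return f"{BASE}/{category}/{owner}/{repo}"

url = quality_url("ml-frameworks", "aallan", "benchmarking-ml-on-the-edge")
print(url)

# Uncomment to fetch live data (counts against the 100 requests/day limit;
# the JSON structure of the response is an assumption):
# with urlopen(url) as resp:
#     data = json.load(resp)
#     print(data)
```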