itlab-vision/dl-benchmark

Deep Learning Inference benchmark. Supports OpenVINO™ toolkit, TensorFlow, TensorFlow Lite, ONNX Runtime, OpenCV DNN, MXNet, PyTorch, Apache TVM, ncnn, PaddlePaddle, etc.

Score: 57 / 100 (Established)

This tool helps developers and machine learning engineers compare the inference speed of deep learning models across different hardware setups and software frameworks. Given a trained model, it runs inference and reports performance metrics such as execution speed. Users include MLOps engineers, researchers, and anyone optimizing model deployment.

Use this if you need to objectively measure and compare how fast your deep learning models perform inference on various hardware setups using different frameworks.

Not ideal if you are looking for a tool to train deep learning models or evaluate their accuracy.

deep-learning-deployment model-optimization mlops performance-engineering hardware-evaluation
No Package · No Dependents
Maintenance: 13 / 25
Adoption: 7 / 25
Maturity: 16 / 25
Community: 21 / 25


Stars: 35
Forks: 37
Language: HTML
License: Apache-2.0
Last pushed: Mar 26, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/itlab-vision/dl-benchmark"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000/day.
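For scripted use, the curl call above can be reproduced in Python with the standard library. This is a minimal sketch: the `quality_url` and `fetch_quality` helper names are our own, and the response is assumed to be JSON whose exact field names are not documented on this page.

```python
import json
import urllib.request

# Base path of the quality API, taken from the curl example above.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(collection: str, owner: str, repo: str) -> str:
    """Build the quality-score endpoint URL for a repository."""
    return f"{API_BASE}/{collection}/{owner}/{repo}"


def fetch_quality(collection: str, owner: str, repo: str) -> dict:
    """Fetch the quality record and parse it as JSON.

    The response schema is not documented here; inspect the
    returned dict to see which fields are available.
    """
    with urllib.request.urlopen(quality_url(collection, owner, repo)) as resp:
        return json.load(resp)


if __name__ == "__main__":
    # Equivalent to the curl command shown above (counts against
    # the 100 requests/day anonymous quota).
    data = fetch_quality("ml-frameworks", "itlab-vision", "dl-benchmark")
    print(json.dumps(data, indent=2))
```

The keyless quota applies per day, so cache responses locally if you poll many repositories.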