itlab-vision/dl-benchmark
Deep Learning Inference benchmark. Supports OpenVINO™ toolkit, TensorFlow, TensorFlow Lite, ONNX Runtime, OpenCV DNN, MXNet, PyTorch, Apache TVM, ncnn, PaddlePaddle, etc.
This tool helps developers and machine learning engineers compare the inference speed of deep learning models across different hardware and software frameworks. Given trained models, it runs inference and reports performance metrics such as inference speed. Typical users include MLOps engineers, researchers, and anyone optimizing model deployment.
Use this if you need to objectively measure and compare how fast your deep learning models perform inference on various hardware setups using different frameworks.
Not ideal if you are looking for a tool to train deep learning models or evaluate their accuracy.
Stars
35
Forks
37
Language
HTML
License
Apache-2.0
Last pushed
Mar 26, 2026
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/itlab-vision/dl-benchmark"
Open to everyone: 100 requests/day with no key. A free key raises the limit to 1,000/day.
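The same endpoint can be queried from Python. A minimal sketch using only the standard library; the URL pattern comes from the curl example above, while the JSON response schema is an assumption — inspect the actual payload before relying on specific fields:

```python
import json
import urllib.request

# Base endpoint taken from the curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks"


def build_url(owner: str, repo: str) -> str:
    """Build the per-repository quality endpoint URL."""
    return f"{BASE}/{owner}/{repo}"


def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch and decode the JSON payload (100 requests/day without a key)."""
    with urllib.request.urlopen(build_url(owner, repo)) as resp:
        return json.load(resp)


# Example: the repository on this page.
url = build_url("itlab-vision", "dl-benchmark")
```

Calling `fetch_quality("itlab-vision", "dl-benchmark")` performs the same request as the curl command; the helper names here are illustrative, not part of the API.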
Related frameworks
NVIDIA/TransformerEngine
A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit and 4-bit...
mlcommons/inference
Reference implementations of MLPerf® inference benchmarks
mlcommons/training
Reference implementations of MLPerf® training benchmarks
datamade/usaddress
:us: a python library for parsing unstructured United States address strings into address components
GRAAL-Research/deepparse
Deepparse is a state-of-the-art library for parsing multinational street addresses using deep learning