yas-sim/OpenVINO_Asynchronous_API_Performance_Demo
This project demonstrates the high performance of the OpenVINO asynchronous inference API.
This project is aimed at software developers building applications that run deep learning models for tasks such as image classification or object detection. It shows how to process large numbers of images or video frames quickly and efficiently by keeping the available hardware (CPUs, GPUs, or NPUs) fully utilized: raw input data goes in, and high-throughput model predictions come out.
No commits in the last 6 months.
Use this if you are a developer building a high-performance deep learning application where processing speed and efficient hardware utilization are critical.
Not ideal if you are a non-developer seeking a ready-to-use application, as this is a technical demonstration for software engineers.
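The core idea the demo illustrates is that asynchronous inference overlaps request submission with result collection, so the device is never idle between requests. OpenVINO's own mechanism for this is its async inference API; as a library-agnostic sketch of the same pattern, assuming a hypothetical stand-in `infer` function in place of a real model call:

```python
import concurrent.futures
import time

def infer(frame):
    # Stand-in for a model inference call (hypothetical; a real
    # application would submit the frame to an inference engine).
    time.sleep(0.01)  # simulate device latency
    return frame * 2  # dummy "prediction"

frames = list(range(8))

# Synchronous: each request waits for the previous one to finish.
t0 = time.perf_counter()
sync_results = [infer(f) for f in frames]
sync_time = time.perf_counter() - t0

# Asynchronous: several requests are in flight at once, so total
# wall time drops while the results stay identical.
t0 = time.perf_counter()
with concurrent.futures.ThreadPoolExecutor(max_workers=4) as pool:
    async_results = list(pool.map(infer, frames))
async_time = time.perf_counter() - t0

assert sync_results == async_results
print(f"sync {sync_time:.3f}s vs async {async_time:.3f}s")
```

The speedup here comes purely from overlapping waits; with a real accelerator the same structure keeps the device's compute queue saturated.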
Stars: 7
Forks: —
Language: Python
License: —
Category:
Last pushed: Apr 07, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/yas-sim/OpenVINO_Asynchronous_API_Performance_Demo"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
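The same endpoint can be called from code. A minimal sketch, assuming the path segments in the curl example generalize as `/{category}/{owner}/{repo}` (the response schema is not documented here, so the fetch just parses whatever JSON comes back):

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the endpoint URL for a repository's quality data."""
    return f"{BASE}/{category}/{owner}/{repo}"

url = quality_url("ml-frameworks", "yas-sim",
                  "OpenVINO_Asynchronous_API_Performance_Demo")
print(url)

# Uncomment to fetch (counts against the 100 requests/day limit):
# with urllib.request.urlopen(url) as resp:
#     data = json.load(resp)
#     print(json.dumps(data, indent=2))
```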
Higher-rated alternatives
microsoft/onnxruntime
ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator
onnx/onnx
Open standard for machine learning interoperability
PINTO0309/onnx2tf
Self-Created Tools to convert ONNX files (NCHW) to TensorFlow/TFLite/Keras format (NHWC). The...
NVIDIA/TensorRT
NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This...
onnx/onnxmltools
ONNXMLTools enables conversion of models to ONNX