iwatake2222/InferenceHelper
C++ Helper Class for Deep Learning Inference Frameworks: TensorFlow Lite, TensorRT, OpenCV, OpenVINO, ncnn, MNN, SNPE, Arm NN, NNabla, ONNX Runtime, LibTorch, TensorFlow
Building C++ applications that use deep learning models for tasks like image recognition or natural language processing often means dealing with many different backend frameworks, each with its own API. This tool provides a consistent way to integrate various deep learning inference engines (such as TensorFlow Lite, TensorRT, or ONNX Runtime) into your C++ projects: it takes a trained model and lets you run inference through a standardized interface, regardless of which backend you choose. This makes it well suited to C++ application developers who need to deploy AI models efficiently.
297 stars. No commits in the last 6 months.
Use this if you are a C++ developer building an application that needs to use deep learning models and want the flexibility to switch between or support multiple inference frameworks and hardware platforms without rewriting core logic.
Not ideal if you are developing models in Python or primarily focused on training deep learning models rather than deploying them in a C++ application.
Stars: 297
Forks: 67
Language: C++
License: Apache-2.0
Category:
Last pushed: Apr 09, 2022
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/iwatake2222/InferenceHelper"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000/day.
Higher-rated alternatives:
- microsoft/onnxruntime: ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator
- onnx/onnx: Open standard for machine learning interoperability
- PINTO0309/onnx2tf: Self-Created Tools to convert ONNX files (NCHW) to TensorFlow/TFLite/Keras format (NHWC). The...
- NVIDIA/TensorRT: NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This...
- onnx/onnxmltools: ONNXMLTools enables conversion of models to ONNX