iwatake2222/InferenceHelper

C++ Helper Class for Deep Learning Inference Frameworks: TensorFlow Lite, TensorRT, OpenCV, OpenVINO, ncnn, MNN, SNPE, Arm NN, NNabla, ONNX Runtime, LibTorch, TensorFlow

Score: 49 / 100 (Emerging)

Building applications that use deep learning models for tasks like image recognition or natural language processing often means dealing with many different backend frameworks. This tool gives you a consistent way to integrate inference engines such as TensorFlow Lite, TensorRT, or ONNX Runtime into your C++ projects: you load a trained model and run its predictions through a single standardized interface, no matter which backend you choose. This makes it well suited to C++ application developers who need to deploy AI models efficiently.
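To make that pattern concrete, here is a minimal sketch of the backend-abstraction idea in C++. Every class, function, and parameter name below is a hypothetical stand-in, not the library's actual API; consult the repository for the real interface, which wraps each supported framework behind a common base class in this spirit.

#include <cstdio>
#include <memory>
#include <string>
#include <vector>

// Common interface that every backend implements (hypothetical names).
class InferenceEngine {
public:
    virtual ~InferenceEngine() = default;
    virtual bool Initialize(const std::string& model_path) = 0;
    virtual bool Run(const std::vector<float>& input, std::vector<float>& output) = 0;
};

// One stand-in backend; a real helper would wrap TensorFlow Lite,
// TensorRT, ONNX Runtime, etc. behind the same interface.
class DummyBackend final : public InferenceEngine {
public:
    bool Initialize(const std::string& model_path) override {
        std::printf("loading model: %s\n", model_path.c_str());
        return true;
    }
    bool Run(const std::vector<float>& input, std::vector<float>& output) override {
        output = input;  // identity "model", for illustration only
        return true;
    }
};

// Factory choosing a backend at runtime, so application code
// stays the same when the deployment target changes.
std::unique_ptr<InferenceEngine> CreateEngine(const std::string& backend) {
    if (backend == "dummy") return std::make_unique<DummyBackend>();
    return nullptr;  // unknown backend
}

int main() {
    auto engine = CreateEngine("dummy");
    if (!engine || !engine->Initialize("model.tflite")) return 1;
    std::vector<float> input{1.0f, 2.0f, 3.0f};
    std::vector<float> output;
    engine->Run(input, output);
    std::printf("got %zu output values\n", output.size());
    return 0;
}

The point of the factory is that swapping TensorRT for TensorFlow Lite becomes a one-line change (or a runtime flag) instead of a rewrite of the application's inference path.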

297 stars. No commits in the last 6 months.

Use this if you are a C++ developer building an application that needs to run deep learning models and you want the flexibility to switch between, or simultaneously support, multiple inference frameworks and hardware platforms without rewriting core logic.

Not ideal if you are developing models in Python or primarily focused on training deep learning models rather than deploying them in a C++ application.

deep-learning-deployment edge-ai computer-vision ai-inference embedded-systems
Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 10 / 25
Maturity 16 / 25
Community 23 / 25


Stars: 297
Forks: 67
Language: C++
License: Apache-2.0
Last pushed: Apr 09, 2022
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/iwatake2222/InferenceHelper"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
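If you would rather fetch this data from C++ than from the shell, a small libcurl client is enough. This is a sketch under assumptions: libcurl is installed (link with -lcurl), and since the response schema is not documented here, the program simply prints the raw JSON body.

#include <curl/curl.h>
#include <iostream>
#include <string>

// Append each chunk libcurl receives to a std::string.
static size_t WriteToString(char* ptr, size_t size, size_t nmemb, void* userdata) {
    static_cast<std::string*>(userdata)->append(ptr, size * nmemb);
    return size * nmemb;
}

int main() {
    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL* curl = curl_easy_init();
    if (!curl) return 1;

    std::string body;
    curl_easy_setopt(curl, CURLOPT_URL,
        "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/iwatake2222/InferenceHelper");
    curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, WriteToString);
    curl_easy_setopt(curl, CURLOPT_WRITEDATA, &body);

    CURLcode res = curl_easy_perform(curl);
    if (res == CURLE_OK) {
        std::cout << body << "\n";  // raw JSON response
    } else {
        std::cerr << "request failed: " << curl_easy_strerror(res) << "\n";
    }

    curl_easy_cleanup(curl);
    curl_global_cleanup();
    return res == CURLE_OK ? 0 : 1;
}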