zpye/SimpleInfer
A simple neural network inference framework
This framework helps C++ and Python developers integrate pre-trained neural network models into their applications: given a trained model and new input data, it outputs the model's predictions or classifications. It is aimed at developers building applications that need to perform tasks such as object detection (e.g., identifying cars or people in images) with an emphasis on runtime efficiency.
No commits in the last 6 months.
Use this if you are a C++ or Python developer who needs to incorporate neural network inference capabilities, especially for computer vision tasks, into your applications with a focus on performance.
Not ideal if you are a data scientist or machine learning engineer focused on training new models, or if you need a high-level API for rapid prototyping without deep integration.
Stars: 25
Forks: 2
Language: C++
License: MIT
Category: ml-frameworks
Last pushed: Aug 01, 2023
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/zpye/SimpleInfer"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
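The same endpoint can also be queried programmatically. Below is a minimal Python sketch using only the standard library; it assumes the endpoint returns JSON, and the response schema is not documented on this page, so inspect a live response before relying on specific keys:

```python
import json
import urllib.request

# Base URL of the quality API shown in the curl example above.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def build_url(category: str, owner: str, repo: str) -> str:
    """Construct the endpoint URL for a repository's quality data."""
    return f"{API_BASE}/{category}/{owner}/{repo}"

def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """Fetch and decode the JSON payload (100 requests/day without a key)."""
    with urllib.request.urlopen(build_url(category, owner, repo)) as resp:
        return json.load(resp)

# Example call (performs a live network request):
# data = fetch_quality("ml-frameworks", "zpye", "SimpleInfer")
```

For higher limits (1,000 requests/day), obtain a free key and send it with the request; how the key is passed (header or query parameter) is not specified on this page.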
Higher-rated alternatives
microsoft/onnxruntime
ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator
onnx/onnx
Open standard for machine learning interoperability
PINTO0309/onnx2tf
Self-created tools to convert ONNX files (NCHW) to TensorFlow/TFLite/Keras format (NHWC).
NVIDIA/TensorRT
NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs.
onnx/onnxmltools
ONNXMLTools enables conversion of models to ONNX