mlomb/onnx2code
Convert ONNX models to plain C++ code (without dependencies)
This tool helps machine learning engineers and embedded systems developers convert trained ONNX (Open Neural Network Exchange) models into standalone C++ code. You provide an ONNX model file as input, and it generates C++ source files that compile and run without any external dependencies. This is ideal for deploying AI models on resource-constrained devices or in environments where complex dependencies are undesirable.
No commits in the last 6 months.
Use this if you need to integrate a machine learning model into a C++ application with minimal dependencies, especially for embedded systems or high-performance computing.
Not ideal if your ONNX model uses advanced operations not listed as supported, if it requires quantized inference, or if you are already well served by existing ONNX runtime libraries.
Stars: 22
Forks: 1
Language: Python
License: —
Category: —
Last pushed: Mar 27, 2023
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/mlomb/onnx2code"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
microsoft/onnxruntime
ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator
onnx/onnx
Open standard for machine learning interoperability
PINTO0309/onnx2tf
Self-Created Tools to convert ONNX files (NCHW) to TensorFlow/TFLite/Keras format (NHWC). The...
NVIDIA/TensorRT
NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This...
onnx/onnxmltools
ONNXMLTools enables conversion of models to ONNX