olilarkin/ort-builder
ONNX Runtime static library builder
This project helps C++ developers integrate ONNX machine learning models into their applications, especially on Apple platforms. It takes an ONNX model file and outputs a slimmed-down ONNX Runtime static library or xcframework, built with only the operators and types the model actually needs, along with C++ source code that embeds the model directly in the binary. This is ideal for applications where a minimal footprint and fast inference are critical.
No commits in the last 6 months.
Use this if you are a C++ developer building an application for Apple platforms (macOS, iOS) and need to embed an ONNX model efficiently, reducing the size of the inference engine.
Not ideal if you are developing an audio plugin for a Digital Audio Workstation (DAW) and are concerned about potential symbol conflicts with the host application.
Stars
74
Forks
19
Language
C++
License
MIT
Category
ml-frameworks
Last pushed
Apr 22, 2025
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/olilarkin/ort-builder"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
microsoft/onnxruntime
ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator
onnx/onnx
Open standard for machine learning interoperability
PINTO0309/onnx2tf
Self-Created Tools to convert ONNX files (NCHW) to TensorFlow/TFLite/Keras format (NHWC). The...
NVIDIA/TensorRT
NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This...
onnx/onnxmltools
ONNXMLTools enables conversion of models to ONNX