alrevuelta/cONNXr
Pure C ONNX runtime with zero dependencies for embedded devices
This project helps embedded-systems developers deploy pre-trained machine learning models on older or resource-constrained hardware. Given an ONNX-formatted model and a protocol buffer (.pb) input file, it outputs the model's inference results. It targets microcontrollers, IoT devices, and other embedded systems.
216 stars. No commits in the last 6 months.
Use this if you need to run machine learning inference on embedded devices that lack modern C++ support, or where a zero-dependency build is a requirement.
Not ideal if you need broad ONNX operator coverage, data types beyond float, or a production-ready solution: the project is at an early development stage.
Stars
216
Forks
34
Language
C
License
MIT
Category
Last pushed
Oct 29, 2023
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/alrevuelta/cONNXr"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000/day.
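The endpoint above returns JSON, but its response schema isn't documented on this page, so the field names below (`stars`, `forks`, `language`, `license`) are assumptions mirroring the stats listed here, not a confirmed contract. A minimal sketch of fetching and summarizing a record:

```python
import json
from urllib.request import urlopen

# Endpoint from the curl example above.
URL = "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/alrevuelta/cONNXr"

def fetch_quality(url: str) -> dict:
    """Fetch a repository's quality record (no API key: 100 requests/day)."""
    with urlopen(url) as resp:
        return json.load(resp)

def summarize(record: dict) -> str:
    """Render a one-line summary from an (assumed) quality record."""
    return f"{record['language']} project, {record['stars']} stars, {record['forks']} forks"

# Offline illustration with the stats shown on this page, rather than a
# live call to fetch_quality(URL); the field names are assumptions.
sample = {"stars": 216, "forks": 34, "language": "C", "license": "MIT"}
print(summarize(sample))  # → C project, 216 stars, 34 forks
```

Swap `sample` for `fetch_quality(URL)` to work against the live endpoint, keeping in mind the daily request limits above.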
Higher-rated alternatives
microsoft/onnxruntime
ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator
onnx/onnx
Open standard for machine learning interoperability
PINTO0309/onnx2tf
Self-Created Tools to convert ONNX files (NCHW) to TensorFlow/TFLite/Keras format (NHWC). The...
NVIDIA/TensorRT
NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This...
onnx/onnxmltools
ONNXMLTools enables conversion of models to ONNX