xboot/libonnx
A lightweight, portable, pure C99 ONNX inference engine for embedded devices, with hardware acceleration support.
This is a tool for developers building applications for embedded devices. It lets them integrate pre-trained ONNX machine learning models into C99 projects, enabling features such as image recognition or sensor data analysis directly on the device. Developers supply an ONNX model file and input data; the engine returns the model's predictions or classifications, using hardware acceleration where available for efficiency.
647 stars. No commits in the last 6 months.
Use this if you are a C99 developer building an application for an embedded device and need to run ONNX machine learning models directly on that hardware.
Not ideal if you are not a C99 developer or if your target environment is not an embedded device.
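The model-in, predictions-out workflow described above can be sketched in a few calls. This is a minimal sketch: the function and type names (`onnx_context_alloc_from_file`, `onnx_tensor_search`, `onnx_run`, `onnx_context_free`) follow the examples in the libonnx README, and the tensor names are placeholders for whatever your particular model declares.

```c
/*
 * Minimal libonnx inference sketch (assumes the API shown in the
 * project's README). Compile against the library, for example:
 *   gcc main.c -lonnx -lm
 */
#include <onnx.h>

int main(void)
{
    /* Load the pre-trained model. The NULL/0 arguments mean no extra
     * resolvers, i.e. no hardware-accelerated operator backends are
     * registered and the default C implementations are used. */
    struct onnx_context_t * ctx =
        onnx_context_alloc_from_file("model.onnx", NULL, 0);
    if(!ctx)
        return 1;

    /* Look up input/output tensors by the names your model defines
     * ("input" and "output" here are hypothetical placeholders). */
    struct onnx_tensor_t * input = onnx_tensor_search(ctx, "input");
    struct onnx_tensor_t * output = onnx_tensor_search(ctx, "output");
    if(!input || !output)
    {
        onnx_context_free(ctx);
        return 1;
    }

    /* ... fill input->datas with your image or sensor data here ... */

    onnx_run(ctx); /* run inference over the whole graph */

    /* ... read predictions back from output->datas ... */

    onnx_context_free(ctx);
    return 0;
}
```

On a device with an accelerator, the resolver arguments are where a hardware-specific operator backend would be plugged in instead of `NULL, 0`.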
Stars: 647
Forks: 116
Language: C
License: MIT
Category:
Last pushed: Aug 05, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/xboot/libonnx"
Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000/day.
Related frameworks
microsoft/onnxruntime
ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator
onnx/onnx
Open standard for machine learning interoperability
PINTO0309/onnx2tf
Self-Created Tools to convert ONNX files (NCHW) to TensorFlow/TFLite/Keras format (NHWC).
NVIDIA/TensorRT
NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs.
onnx/onnxmltools
ONNXMLTools enables conversion of models to ONNX