kraiskil/onnx2c
Open Neural Network Exchange to C compiler.
This tool helps embedded systems developers deploy machine learning models onto microcontrollers efficiently. It takes a pre-trained neural network in the ONNX file format and generates highly optimized C code. The output is a single C file ready for inclusion in a microcontroller project, allowing developers to integrate AI inference without complex dependencies or heavy memory usage.
Use this if you need to run machine learning inference on resource-constrained microcontrollers and have your trained model available as an ONNX file.
Not ideal if your neural network requires on-device training (backpropagation) or if you plan to use specialized hardware accelerators, as it focuses solely on inference for TinyML environments.
Stars: 368
Forks: 67
Language: C
License: —
Category:
Last pushed: Feb 07, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/kraiskil/onnx2c"
Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000/day.
Related frameworks
microsoft/onnxruntime
ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator
onnx/onnx
Open standard for machine learning interoperability
PINTO0309/onnx2tf
Self-Created Tools to convert ONNX files (NCHW) to TensorFlow/TFLite/Keras format (NHWC). The...
NVIDIA/TensorRT
NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This...
onnx/onnxmltools
ONNXMLTools enables conversion of models to ONNX