onnxruntime and onnx2c
These are ecosystem siblings: onnxruntime is a widely adopted runtime for executing ONNX models across platforms, while onnx2c is a specialized compiler that transpiles ONNX models into C code for embedded or resource-constrained deployments where a full runtime is impractical.
About onnxruntime
microsoft/onnxruntime
ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator
This helps machine learning engineers and data scientists deploy and train their models more efficiently. It takes trained models from frameworks like PyTorch or TensorFlow, or from classical ML libraries, and accelerates them, yielding faster predictions and shorter training times. It's for anyone building or running ML models who needs to optimize performance across different hardware.
About onnx2c
kraiskil/onnx2c
Open Neural Network Exchange to C compiler.
This tool helps embedded systems developers deploy machine learning models onto microcontrollers efficiently. It takes a pre-trained neural network in the ONNX file format and generates highly optimized C code. The output is a single C file ready for inclusion in a microcontroller project, allowing developers to integrate AI inference without complex dependencies or heavy memory usage.
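The workflow the paragraph above describes can be sketched as a two-step build (a hedged sketch, not a verified recipe: the model filename and compiler flags are placeholders, and the exact shape of the generated entry function depends on the model's inputs and outputs):

```shell
# Transpile a trained ONNX model to a single self-contained C file.
# onnx2c writes the generated code to stdout.
onnx2c model.onnx > model.c

# Compile the generated file for the target, e.g. an ARM Cortex-M MCU
# (cross-compiler and flags are illustrative placeholders).
arm-none-eabi-gcc -O2 -c model.c -o model.o
```

The resulting object file links into the firmware like any other translation unit; the application calls the generated inference function with plain C arrays, with no runtime library or dynamic allocation required.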