onnxruntime and onnx2c

These are ecosystem siblings: onnxruntime is a widely adopted runtime for executing ONNX models across platforms, while onnx2c is a specialized compiler that transpiles ONNX models to C code for embedded or resource-constrained deployments where a full runtime is impractical.

                 onnxruntime        onnx2c
Overall score    93 (Verified)      58 (Established)
Maintenance      22/25              10/25
Adoption         21/25              10/25
Maturity         25/25              16/25
Community        25/25              22/25
Stars            19,534             368
Forks            3,759              67
Downloads        474                (not listed)
Commits (30d)    172                0
Language         C++                C
License          MIT                (not listed)
Flags            No risk flags      No Package, No Dependents

About onnxruntime

microsoft/onnxruntime

ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator

ONNX Runtime helps machine learning engineers and data scientists deploy and train their models more efficiently. It loads trained models from frameworks such as PyTorch and TensorFlow, or from classical ML libraries, and accelerates both inference and training. It is for anyone building or running ML models who needs to optimize performance across different hardware.

machine-learning-deployment model-optimization deep-learning-inference ml-model-training data-science-workflow

About onnx2c

kraiskil/onnx2c

Open Neural Network Exchange to C compiler.

This tool helps embedded systems developers deploy machine learning models onto microcontrollers efficiently. It takes a pre-trained neural network in the ONNX file format and generates highly optimized C code. The output is a single C file ready for inclusion in a microcontroller project, allowing developers to integrate AI inference without complex dependencies or heavy memory usage.

embedded-systems microcontroller-programming edge-ai machine-learning-deployment tiny-ml

Scores updated daily from GitHub, PyPI, and npm data.