microsoft/onnxruntime
ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator
ONNX Runtime helps machine learning engineers and data scientists deploy and train their models more efficiently. It takes trained models from frameworks like PyTorch or TensorFlow, or from classical ML libraries, and runs them with lower inference latency and shorter training times. It's for anyone building or running ML models who needs to optimize performance across different hardware.
19,534 stars and 474 monthly downloads. Used by 153 other packages. Actively maintained with 172 commits in the last 30 days. Available on PyPI and npm.
Use this if you need to speed up your machine learning model's predictions or reduce the time it takes to train large transformer models on GPUs.
Not ideal if you are looking for a tool to build or design machine learning models from scratch rather than to optimize existing ones.
Stars: 19,534
Forks: 3,759
Language: C++
License: MIT
Category: ML frameworks
Last pushed: Mar 13, 2026
Monthly downloads: 474
Commits (30d): 172
Dependencies: 6
Reverse dependents: 153
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/microsoft/onnxruntime"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
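The curl command above can also be reproduced with the Python standard library. The helper names below (quality_url, fetch_quality) are my own, and the shape of the JSON response is not documented here, so fetch_quality simply returns whatever the endpoint sends back.

```python
import json
import urllib.request
from urllib.parse import quote

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    # Mirrors the documented endpoint path: /quality/<category>/<owner>/<repo>
    return f"{BASE}/{quote(category)}/{quote(owner)}/{quote(repo)}"

def fetch_quality(category: str, owner: str, repo: str) -> dict:
    # No key needed for up to 100 requests/day, per the note above.
    with urllib.request.urlopen(quality_url(category, owner, repo),
                                timeout=10) as resp:
        return json.load(resp)

url = quality_url("ml-frameworks", "microsoft", "onnxruntime")
print(url)
```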
Related frameworks
onnx/onnx
Open standard for machine learning interoperability
PINTO0309/onnx2tf
Self-Created Tools to convert ONNX files (NCHW) to TensorFlow/TFLite/Keras format (NHWC)
NVIDIA/TensorRT
NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs
onnx/onnxmltools
ONNXMLTools enables conversion of models to ONNX
microsoft/onnxconverter-common
Common utilities for ONNX converters