microsoft/onnxruntime

ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator

Score: 93 / 100 (Verified)

ONNX Runtime helps machine learning engineers and data scientists deploy and train their models more efficiently. It takes trained models from frameworks like PyTorch or TensorFlow, or from classical ML libraries, and runs them with lower inference latency or shorter training times. It's for anyone building or running ML models who needs to optimize performance across different hardware.

19,534 stars and 474 monthly downloads. Used by 153 other packages. Actively maintained with 172 commits in the last 30 days. Available on PyPI and npm.

Use this if you need to speed up your machine learning model's predictions or reduce the time it takes to train large transformer models on GPUs.

Not ideal if you need a tool to build or design machine learning models from scratch rather than to optimize existing ones.
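To make the use case concrete, here is a minimal sketch of running inference with the onnxruntime Python package. It assumes a model has already been exported to ONNX; the file path "model.onnx" and the input array shape are placeholders, not values from this listing.

```python
def run_model(model_path, inputs):
    """Run one inference pass over an ONNX model on CPU.

    Assumes the `onnxruntime` package is installed and `inputs` is a
    NumPy array matching the model's first input. Hypothetical helper,
    not part of the onnxruntime API itself.
    """
    import onnxruntime as ort  # imported lazily so the sketch loads without it

    # InferenceSession loads the model graph and picks an execution provider.
    sess = ort.InferenceSession(model_path, providers=["CPUExecutionProvider"])

    # Feed the array under the model's declared input name; None = all outputs.
    input_name = sess.get_inputs()[0].name
    return sess.run(None, {input_name: inputs})

# Example (not run here): run_model("model.onnx", some_numpy_array)
```

Swapping "CPUExecutionProvider" for a GPU provider such as "CUDAExecutionProvider" is how the same model is accelerated on different hardware.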

machine-learning-deployment model-optimization deep-learning-inference ml-model-training data-science-workflow
Maintenance 22 / 25
Adoption 21 / 25
Maturity 25 / 25
Community 25 / 25


Stars: 19,534
Forks: 3,759
Language: C++
License: MIT
Last pushed: Mar 13, 2026
Monthly downloads: 474
Commits (30d): 172
Dependencies: 6
Reverse dependents: 153

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/microsoft/onnxruntime"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
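The same endpoint can be called from Python with only the standard library. This is a sketch: the URL path comes from the curl example above, but the shape of the JSON response is an assumption, not documented schema.

```python
import json
from urllib.request import urlopen

# Base path taken from the curl example above.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, repo: str) -> str:
    """Build the quality-record URL for a repo, e.g.
    ("ml-frameworks", "microsoft/onnxruntime")."""
    return f"{API_BASE}/{category}/{repo}"

def fetch_quality(category: str, repo: str) -> dict:
    """Fetch and decode the JSON quality record (needs network access).

    The returned dict's field names are whatever the API emits; they are
    not assumed here.
    """
    with urlopen(quality_url(category, repo)) as resp:
        return json.load(resp)

# Example (not run here): fetch_quality("ml-frameworks", "microsoft/onnxruntime")
```

Unauthenticated calls are limited to 100 requests/day, so a client that polls many packages should cache responses or use a free key for the 1,000/day tier.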