onnxruntime and onnx

ONNX Runtime is the execution engine that runs models saved in the standard format defined by ONNX. The two are complements, typically used together in a deployment pipeline: ONNX defines the model format, ONNX Runtime executes it.

| | onnxruntime | onnx |
|---|---|---|
| Score | 93 (Verified) | 85 (Verified) |
| Maintenance | 22/25 | 20/25 |
| Adoption | 21/25 | 15/25 |
| Maturity | 25/25 | 25/25 |
| Community | 25/25 | 25/25 |
| Stars | 19,534 | 20,477 |
| Forks | 3,759 | 3,896 |
| Downloads | 474 | |
| Commits (30d) | 172 | 43 |
| Language | C++ | Python |
| License | MIT | Apache-2.0 |
| Risk flags | None | None |

About onnxruntime

microsoft/onnxruntime

ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator

This helps machine learning engineers and data scientists deploy and train their models more efficiently. It takes trained models from frameworks like PyTorch or TensorFlow, or from classical ML libraries, and runs inference (and optionally training) faster. It's for anyone building or running ML models who needs to optimize performance across different hardware.

machine-learning-deployment model-optimization deep-learning-inference ml-model-training data-science-workflow

About onnx

onnx/onnx

Open standard for machine learning interoperability

This project defines an open-source format for AI models, helping developers move models between machine learning tools. A model trained in one framework can be exported to the standardized format and then used, especially for scoring/inference, in another framework or on different hardware. AI developers who build and deploy machine learning models are the primary users.

ai-model-deployment machine-learning-interoperability model-inference deep-learning ai-development

Scores updated daily from GitHub, PyPI, and npm data. How scores work