ort and rten

These are competitors: both provide ONNX inference engines for Rust. ort wraps Microsoft's ONNX Runtime and offers more comprehensive ML operations, including training support, while rten is a simpler, lightweight inference engine written in pure Rust.

                 ort            rten
Score            63             50
Status           Established    Established
Maintenance      17/25          10/25
Adoption         10/25          20/25
Maturity         16/25          8/25
Community        20/25          12/25
Stars            2,068          294
Forks            222            18
Downloads        (not listed)   57,106
Commits (30d)    20             0
Language         Rust           Rust
License          Apache-2.0     (not listed)

Registry badges: ort shows No Package and No Dependents; rten shows No License, No Package, and No Dependents.

About ort

pykeio/ort

Fast ML inference & training for ONNX models in Rust

This helps machine learning engineers and MLOps professionals efficiently deploy and run pre-trained machine learning models, regardless of where they were originally built (e.g., PyTorch, TensorFlow). It takes an ONNX-formatted model and data as input, producing fast, hardware-accelerated predictions or training updates. This is ideal for those needing to integrate powerful AI capabilities into applications running on user devices or in data centers.

MLOps model deployment AI integration edge AI deep learning inference
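The workflow described above can be sketched in a few lines. This is a minimal, illustrative example assuming the ort 2.x API; the builder and macro names, the "model.onnx" path, the input name "input", and the tensor shape are assumptions that depend on your ort version and model, not details from this page.

```rust
// Hypothetical sketch: load an ONNX model with ort and run one inference.
// Assumes the ort 2.x API; exact names may differ between versions.
use ort::session::Session;
use ort::value::Tensor;

fn main() -> ort::Result<()> {
    // Build a session from an ONNX model file ("model.onnx" is a placeholder).
    let session = Session::builder()?.commit_from_file("model.onnx")?;

    // A dummy 1x4 f32 input; the real shape and dtype depend on the model.
    let input = Tensor::from_array(([1usize, 4], vec![0.1_f32, 0.2, 0.3, 0.4]))?;

    // "input" must match the model's declared input name.
    let outputs = session.run(ort::inputs!["input" => input])?;
    println!("model produced {} output(s)", outputs.len());
    Ok(())
}
```

Hardware acceleration (CUDA, TensorRT, CoreML, etc.) is enabled through execution-provider options on the session builder rather than in the run call itself.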

About rten

robertknight/rten

ONNX neural network inference engine

This project helps developers integrate pre-trained machine learning models, often created in Python frameworks like PyTorch, directly into Rust applications or web-based JavaScript environments. It takes an ONNX model file as input and allows the application to run the model efficiently, producing predictions or classifications. It's designed for developers building applications where machine learning inference needs to run directly within a Rust-powered backend or a web browser.

machine-learning-inference edge-ai application-development web-development embedded-systems
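A comparable sketch for rten, under stated assumptions: rten runs models in its own format, which you produce from an ONNX file with the rten-convert tool, and the type and method names below (Model::load_file, run_one, NdTensor) are taken on faith from rten's public API and may differ across releases; "model.rten" is a placeholder path.

```rust
// Hypothetical sketch: run a converted model with rten.
// Assumes an ONNX model already converted via rten-convert.
use rten::Model;
use rten_tensor::NdTensor;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Load a model in rten's own format ("model.rten" is a placeholder).
    let model = Model::load_file("model.rten")?;

    // A dummy 1x4 f32 input; the real shape depends on the model.
    let input = NdTensor::<f32, 2>::zeros([1, 4]);

    // Run the model with a single input and take its first output.
    let _output = model.run_one(input.into(), None)?;
    println!("inference completed");
    Ok(())
}
```

Because rten is pure Rust, the same code path can also be compiled to WebAssembly for in-browser inference, which is the web use case the description above refers to.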

Scores updated daily from GitHub, PyPI, and npm data.