vertexclique/orkhon

Orkhon: ML Inference Framework and Server Runtime

Quality score: 34 / 100 (Emerging)

When deploying machine learning models to production, you often need to serve predictions efficiently. Orkhon helps software engineers integrate existing Python models, or frozen TensorFlow and ONNX models, into high-performance Rust applications: you load a pre-trained model, send it new data, and get predictions back with low latency.
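As a rough illustration, here is a minimal sketch of serving a frozen TensorFlow graph, adapted from the builder-style API shown in the project's README. The model name, file path, and 10x100 input shape are placeholders, and exact module paths and type names (OrkhonConfig, InferenceFact, ORequest, TFRequest) may differ between crate versions; check the crate docs before relying on them.

use orkhon::prelude::*;            // builder API and request types, per the README
use orkhon::tcore::prelude::*;     // InferenceFact and datum types
use orkhon::ttensor::prelude::*;   // tensor helpers
use std::path::PathBuf;

fn main() {
    // Register a frozen TensorFlow graph under a name and declare its
    // input shape; "my_model" and the shape below are illustrative.
    let o = Orkhon::new()
        .config(OrkhonConfig::new()
            .with_input_fact_shape(InferenceFact::dt_shape(f32::datum_type(), tvec![10, 100])))
        .tensorflow("my_model", PathBuf::from("models/my_model.pb"))
        .shareable();

    // Build a batch of inputs and issue an asynchronous prediction request.
    let input = tract_ndarray::Array2::<f32>::zeros((10, 100));
    let handle = o.tensorflow_request_async(
        "my_model",
        ORequest::with_body(TFRequest::new().body(input.into())),
    );

    // Block on the handle here; a real service would await it on its executor.
    let resp = futures::executor::block_on(handle).unwrap();
    println!("{:?}", resp);
}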

151 stars and 10 monthly downloads. No commits in the last 6 months.

Use this if you are a software engineer building backend services in Rust and need to serve machine learning model predictions with high throughput and low latency, especially when integrating Python-trained models.

Not ideal if you work solely within a Python environment or do not need the performance benefits of a Rust-based inference server.

Tags: ML model serving, real-time inference, backend development, production deployment, system architecture
Status: stale (6m) · no package · no dependents
Maintenance: 0 / 25
Adoption: 12 / 25
Maturity: 16 / 25
Community: 6 / 25
(0 + 12 + 16 + 6 = 34, matching the 34 / 100 overall score.)


Stars: 151
Forks: 4
Language: Rust
License: MIT
Last pushed: Feb 01, 2021
Monthly downloads: 10
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/vertexclique/orkhon"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000/day.
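If you would rather hit the endpoint from Rust than from curl, a minimal sketch using the reqwest crate (with its blocking feature enabled) follows. The response schema is not documented on this page, so the body is printed as raw JSON text.

// Cargo.toml: reqwest = { version = "0.11", features = ["blocking"] }
fn main() -> Result<(), reqwest::Error> {
    // Same endpoint as the curl example above.
    let url = "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/vertexclique/orkhon";
    let body = reqwest::blocking::get(url)?.text()?;
    println!("{body}");
    Ok(())
}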