# Rust ONNX Runtime Transformer Models
Three Rust ONNX Runtime models are tracked. The highest-rated is szheng3/Rust-server-pre-trained-models, scoring 27/100 with 18 stars.
Get all 3 projects as JSON:

```shell
curl "https://pt-edge.onrender.com/api/v1/datasets/quality?domain=transformers&subcategory=rust-onnx-runtime&limit=20"
```
The API is open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000 requests/day.
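The response schema is not documented on this page; assuming the endpoint returns a JSON array of objects with `name`, `score`, and `tier` fields, a minimal Python sketch for ranking the results might look like the following (the sample payload and field names are assumptions, not the confirmed API shape):

```python
import json

# Hypothetical response body standing in for the real API reply;
# the schema of /api/v1/datasets/quality is an assumption here.
sample = json.loads("""
[
  {"name": "szheng3/Rust-server-pre-trained-models", "score": 27, "tier": "Experimental"},
  {"name": "Kaden-Schutt/hipfire", "score": 12, "tier": "Experimental"}
]
""")

# Keep Experimental-tier entries, sorted by score, highest first.
experimental = sorted(
    (m for m in sample if m["tier"] == "Experimental"),
    key=lambda m: m["score"],
    reverse=True,
)
for m in experimental:
    print(f'{m["name"]}: {m["score"]}/100')
```

In a real client you would replace `sample` with the parsed body of the curl call above.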
| # | Model | Score | Tier |
|---|---|---|---|
| 1 | szheng3/Rust-server-pre-trained-models: Rust server that summarizes text with pre-trained models | 27/100 | Experimental |
| 2 | Kaden-Schutt/hipfire: RDNA-native LLM inference engine in Rust. 59 tok/s Qwen3-8B on RX 5700 XT —... | | Experimental |
| 3 | Frexio/pegainfer: Run efficient LLM inference using a Rust-based engine with custom CUDA... | | Experimental |