qora-protocol/QORA-LLM-3B
Pure Rust inference engine for the SmolLM3-3B language model. No Python runtime, no CUDA, no external dependencies. Single executable + quantized weights = portable AI on any machine.
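The "quantized weights" in the description refer to storing model parameters in a low-bit integer format and expanding them back to floats at inference time. As an illustration only (this is not code from QORA-LLM-3B, and the function name is made up), a minimal sketch of symmetric int8 dequantization, the scheme many quantized-weight formats use:

```rust
// Illustrative sketch, not this repo's implementation: symmetric int8
// dequantization. Each stored i8 value is multiplied by a per-tensor
// scale to recover an approximate f32 weight.
fn dequantize(quantized: &[i8], scale: f32) -> Vec<f32> {
    quantized.iter().map(|&q| q as f32 * scale).collect()
}

fn main() {
    // Hypothetical weights quantized with scale 0.05.
    let q = [-128i8, 0, 64, 127];
    let weights = dequantize(&q, 0.05);
    println!("{:?}", weights);
}
```

The trade-off this buys is the one the card advertises: i8 storage cuts weight size to a quarter of f32, which is what makes "single executable + quantized weights" portable.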
Stars: 1
Forks: —
Language: Rust
License: Apache-2.0
Category:
Last pushed: Mar 08, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/qora-protocol/QORA-LLM-3B"
Open to everyone: 100 requests/day with no key required; a free key raises the limit to 1,000/day.
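The curl example above encodes the repository as `owner/repo` path segments. A small sketch of building that URL for an arbitrary repository; the base path is taken from the example above, but whether the pattern generalizes to other repos is an assumption:

```shell
# Construct the quality-API URL for a given owner/repo pair.
# Base path copied from the curl example; generalization is assumed.
owner="qora-protocol"
repo="QORA-LLM-3B"
url="https://pt-edge.onrender.com/api/v1/quality/transformers/${owner}/${repo}"
echo "$url"
```

Pass the result to `curl` as in the example above to fetch the data.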
Higher-rated alternatives
EricLBuehler/mistral.rs: Fast, flexible LLM inference
nerdai/llms-from-scratch-rs: A comprehensive Rust translation of the code from Sebastian Raschka's Build an LLM from Scratch book.
brontoguana/krasis: Krasis is a hybrid LLM runtime which focuses on efficient running of larger models on consumer...
ShelbyJenkins/llm_utils: Basic LLM tools, best practices, and minimal abstraction.
Mattbusel/llm-wasm: LLM inference primitives for WebAssembly: cache, retry, routing, guards, cost tracking, templates