ML Inference Benchmarking: Transformer Models
Ten ML inference benchmarking projects are tracked; one scores above 50 (established tier). The highest-rated is OpenNMT/CTranslate2 at 56/100, with 4,354 stars.
Get all 10 projects as JSON:

```shell
curl "https://pt-edge.onrender.com/api/v1/datasets/quality?domain=transformers&subcategory=ml-inference-benchmarking&limit=20"
```
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
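The endpoint's response schema isn't documented on this page. As a minimal sketch, assuming it returns a JSON array of objects with `name`, `score`, and `tier` fields (field names are assumptions), the results could be filtered and ranked by tier like this:

```python
import json

# Hypothetical sample mimicking the assumed response shape of the
# /datasets/quality endpoint. Field names are assumptions; the score for
# mechramc/Orion is a placeholder, not a value from the dataset.
sample = json.loads("""
[
  {"name": "OpenNMT/CTranslate2", "score": 56, "tier": "Established"},
  {"name": "mechramc/Orion", "score": 30, "tier": "Emerging"}
]
""")

def by_tier(projects, tier):
    """Return projects in the given tier, highest score first."""
    matches = [p for p in projects if p["tier"] == tier]
    return sorted(matches, key=lambda p: p["score"], reverse=True)

print([p["name"] for p in by_tier(sample, "Established")])
# → ['OpenNMT/CTranslate2']
```

With a live response from the curl command above, the same `by_tier` helper would apply once the payload is parsed with `json.loads`.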
| # | Model | Description | Score | Tier |
|---|---|---|---|---|
| 1 | OpenNMT/CTranslate2 | Fast inference engine for Transformer models | 56 | Established |
| 2 | mechramc/Orion | Local AI runtime for training & running small LLMs directly on Apple Neural... | | Emerging |
| 3 | Pomilon/LEMA | LEMA (Layer-wise Efficient Memory Abstraction): A hardware-aware framework... | | Experimental |
| 4 | dilbersha/llm-inference-benchmarking-3080 | A production-grade telemetry-aware suite for benchmarking LLM inference... | | Experimental |
| 5 | Yuan-ManX/infera | Infera — A High-Performance Inference Engine for Large Language Models. | | Experimental |
| 6 | gxcsoccer/alloy | Hybrid SSM-Attention language model on Apple Silicon with MLX — interleaving... | | Experimental |
| 7 | timteh/timteh-forge | ⚡ TIMTEH Model Forge — Uncensored, abliterated & reasoning-distilled GGUFs.... | | Experimental |
| 8 | GusLovesMath/Llama3_MacSilicon | Repository for running LLMs efficiently on Mac silicon (M1, M2, M3).... | | Experimental |
| 9 | anviit/llm-inference-serving | Production LLM inference stack — 28ms TTFT, 39 tok/s, 81% cache hit rate on a 6GB GPU | | Experimental |
| 10 | metaskills/fast-llama-inference | Exploring Accelerated Compound AI Systems with SambaNova & Llama 3.3-70B | | Experimental |