ML Inference Benchmarking: Transformer Models

There are 10 ML inference benchmarking projects tracked; one scores above 50 (established tier). The highest-rated is OpenNMT/CTranslate2 at 56/100, with 4,354 stars.

Get all 10 projects as JSON:

```shell
curl "https://pt-edge.onrender.com/api/v1/datasets/quality?domain=transformers&subcategory=ml-inference-benchmarking&limit=20"
```

Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000 requests/day.
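The same query can be issued from Python. A minimal sketch using only the standard library; the endpoint and query parameters come from the curl example above, while the helper names (`build_url`, `fetch_projects`) are illustrative:

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

# Base endpoint from the article's curl example.
BASE = "https://pt-edge.onrender.com/api/v1/datasets/quality"

def build_url(domain, subcategory, limit=20):
    """Assemble the quality-dataset query URL with encoded parameters."""
    params = {"domain": domain, "subcategory": subcategory, "limit": limit}
    return f"{BASE}?{urlencode(params)}"

def fetch_projects(url):
    """Fetch and decode the JSON payload (requires network access)."""
    with urlopen(url) as resp:
        return json.load(resp)

url = build_url("transformers", "ml-inference-benchmarking")
print(url)
# data = fetch_projects(url)  # uncomment to hit the live endpoint
```

Unauthenticated calls count against the 100 requests/day limit noted above.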

| # | Model | Description | Score | Tier |
|---|-------|-------------|-------|------|
| 1 | OpenNMT/CTranslate2 | Fast inference engine for Transformer models | 56 | Established |
| 2 | mechramc/Orion | Local AI runtime for training & running small LLMs directly on Apple Neural... | 34 | Emerging |
| 3 | Pomilon/LEMA | LEMA (Layer-wise Efficient Memory Abstraction): A hardware-aware framework... | 25 | Experimental |
| 4 | dilbersha/llm-inference-benchmarking-3080 | A production-grade telemetry-aware suite for benchmarking LLM inference... | 25 | Experimental |
| 5 | Yuan-ManX/infera | Infera — A High-Performance Inference Engine for Large Language Models. | 25 | Experimental |
| 6 | gxcsoccer/alloy | Hybrid SSM-Attention language model on Apple Silicon with MLX — interleaving... | 24 | Experimental |
| 7 | timteh/timteh-forge | ⚡ TIMTEH Model Forge — Uncensored, abliterated & reasoning-distilled GGUFs.... | 22 | Experimental |
| 8 | GusLovesMath/Llama3_MacSilicon | Repository for running LLMs efficiently on Mac silicon (M1, M2, M3).... | 20 | Experimental |
| 9 | anviit/llm-inference-serving | Production LLM inference stack — 28ms TTFT, 39 tok/s, 81% cache hit rate on a 6GB GPU | 14 | Experimental |
| 10 | metaskills/fast-llama-inference | Exploring Accelerated Compound AI Systems with SambaNova & Llama 3.3-70B | 13 | Experimental |