mlcommons/inference_results_v5.0

This repository contains the results and code for the MLPerf® Inference v5.0 benchmark.

Overall score: 41 / 100 (Emerging)

This project compiles official results and code from the MLPerf® Inference v5.0 benchmark. It provides a standardized view of machine learning inference performance across different hardware and software configurations. ML architects, hardware engineers, and performance analysts can use these results to evaluate and compare the efficiency of various ML systems.

Use this if you need to understand the real-world inference performance of various machine learning systems and hardware.

Not ideal if you are looking for general machine learning models or want to run your own custom inference tests without established benchmarks.

Tags: AI-benchmarking, ML-performance-evaluation, hardware-comparison, deep-learning-inference, system-architecture
No license, no published package, no dependents
Score breakdown (each out of 25):
Maintenance: 10 / 25
Adoption: 5 / 25
Maturity: 8 / 25
Community: 18 / 25
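The four subscores sum to the overall rating, which suggests the 41 / 100 figure is simply their total. A minimal sketch confirming the arithmetic:

```python
# Subscores from the breakdown above, each out of a maximum of 25.
subscores = {"Maintenance": 10, "Adoption": 5, "Maturity": 8, "Community": 18}

# Four categories x 25 points each = 100 possible points.
overall = sum(subscores.values())
print(overall)  # 41, matching the 41 / 100 overall score
```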


Stars: 12
Forks: 13
Language: HTML
License: none
Last pushed: Feb 12, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/mlcommons/inference_results_v5.0"

Open to everyone: 100 requests/day with no key required. A free API key raises the limit to 1,000 requests/day.
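The curl command above can also be issued from Python with only the standard library. A minimal sketch, assuming the endpoint returns JSON; the response field names used in the offline example below (`overall`, `tier`) are guesses, since the actual schema is not documented on this page:

```python
import json
import urllib.request

URL = ("https://pt-edge.onrender.com/api/v1/quality/"
       "ml-frameworks/mlcommons/inference_results_v5.0")

def fetch_quality(url: str = URL) -> dict:
    """Fetch the quality report as parsed JSON (no API key needed)."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)

# Hypothetical payload for offline illustration; real field names may differ.
sample = '{"overall": 41, "tier": "Emerging"}'
report = json.loads(sample)
print(report["overall"], report["tier"])  # 41 Emerging
```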