inference and inference_results_v5.1
One repository provides the reference implementations of the MLPerf Inference benchmarks; the other contains the submitted results and code from one specific round (v5.1) of those benchmarks. They are complements: the first defines how the benchmarks are run, and the second records what systems achieved, so they are used together to understand and apply the benchmark results.
About inference
mlcommons/inference
Reference implementations of MLPerf® inference benchmarks
This project offers standardized benchmarks that measure how quickly various systems can run machine learning models across different deployment scenarios. Given a machine learning model (such as ResNet, BERT, or Llama2) and a system configuration, it produces performance metrics such as inference speed. System architects, hardware engineers, and ML platform developers use it to compare and optimize the performance of their AI systems.
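To illustrate the kind of measurement these benchmarks standardize, the sketch below times a dummy model over repeated queries and reports latency and throughput figures. This is a minimal, hypothetical illustration using only the standard library; `run_model` and the specific metrics are stand-ins, not the MLPerf LoadGen API that the reference implementations actually use.

```python
import statistics
import time


def run_model(query):
    # Hypothetical stand-in for a real inference call (e.g. ResNet, BERT).
    time.sleep(0.001)
    return query * 2


def benchmark(num_queries=100):
    """Time each query and summarize latency/throughput, the style of
    metric MLPerf Inference standardizes across systems."""
    latencies = []
    for q in range(num_queries):
        start = time.perf_counter()
        run_model(q)
        latencies.append(time.perf_counter() - start)
    latencies.sort()
    return {
        "mean_ms": statistics.mean(latencies) * 1e3,
        "p90_ms": latencies[int(0.9 * len(latencies))] * 1e3,
        "qps": num_queries / sum(latencies),
    }


if __name__ == "__main__":
    print(benchmark())
```

Real MLPerf runs add much more structure on top of this idea: defined scenarios (SingleStream, MultiStream, Server, Offline), accuracy checks, and a load generator that issues queries according to each scenario's rules.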
About inference_results_v5.1
mlcommons/inference_results_v5.1
This repository contains the submitted results and code for the MLPerf® Inference v5.1 benchmark round.