inference and inference_results_v5.1

One repository provides the reference implementations for the MLPerf Inference benchmarks, while the other contains the results and submission code from a specific round (v5.1) of those benchmarks; they complement each other and are used together to understand and apply the benchmark results.

inference (score: 71, Verified)
- Maintenance: 20/25 · Adoption: 10/25 · Maturity: 16/25 · Community: 25/25
- Stars: 1,539 · Forks: 612 · Commits (30d): 25
- Language: Python · License: Apache-2.0
- No package published · No dependents

inference_results_v5.1
- Maintenance: 10/25 · Adoption: 3/25 · Maturity: 15/25 · Community: 17/25
- Stars: 3 · Forks: 11 · Commits (30d): 0
- Language: HTML · License: Apache-2.0
- No package published · No dependents

About inference

mlcommons/inference

Reference implementations of MLPerf® inference benchmarks

This project offers standardized benchmarks to measure how quickly various systems can run machine learning models across different deployment scenarios. It takes in various machine learning models (like ResNet, BERT, Llama2) and system configurations, providing performance metrics like inference speed. System architects, hardware engineers, and ML platform developers use this to compare and optimize the performance of their AI systems.
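As a rough illustration of the kind of measurement these benchmarks report, the toy sketch below times a stand-in "model" in an Offline-style scenario (all samples issued at once) and computes throughput in samples per second. This is only a simplified analogy of what the suite measures, not the actual MLPerf harness; the real reference implementations drive systems through a dedicated load generator with several scenarios and strict rules. The function and model here are invented for illustration.

```python
import time

def run_offline_scenario(predict, samples):
    """Toy Offline-style measurement: issue every sample at once and
    report throughput (samples/second). A loose analogy for how an
    offline inference benchmark summarizes performance; NOT the real
    MLPerf load generator."""
    start = time.perf_counter()
    results = [predict(s) for s in samples]
    elapsed = time.perf_counter() - start
    return results, len(samples) / elapsed

# Stand-in "model": squares its input (a placeholder for real inference).
results, throughput = run_offline_scenario(lambda x: x * x, list(range(1000)))
print(f"processed {len(results)} samples at {throughput:.0f} samples/sec")
```

In the actual suite, the load generator also supports latency-bounded scenarios (e.g. single-query streams and server-style arrivals), which a simple throughput loop like this does not capture.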

Tags: AI system performance, ML model deployment, hardware benchmarking, system optimization, inference speed evaluation

About inference_results_v5.1

mlcommons/inference_results_v5.1

This repository contains the results and code for the MLPerf® Inference v5.1 benchmark.

Scores updated daily from GitHub, PyPI, and npm data.