mlcommons/mlperf_client
MLPerf Client is a benchmark for Windows, Linux, and macOS that focuses on client form factors in ML inference scenarios.
This tool helps evaluate how well your personal computer runs common AI tasks like chatbots or image recognition. You provide a configuration for your hardware and the AI models you want to test, and it outputs performance metrics showing how fast and efficiently those tasks run. It's designed for hardware enthusiasts, system integrators, or IT professionals who want to understand and compare client-side AI inference performance.
Use this if you need to benchmark the performance of AI inference tasks on client-side devices like Windows or macOS laptops and desktops, using various hardware and software configurations.
Not ideal if you are looking to benchmark AI training performance, server-side inference, or if you need a graphical user interface for detailed analysis (CLI is the primary interface).
Stars
80
Forks
6
Language
C++
License
Apache-2.0
Category
ML frameworks
Last pushed
Nov 17, 2025
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/mlcommons/mlperf_client"
Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000/day.
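For scripted use, the endpoint's JSON response can be consumed directly. Below is a minimal Python sketch; note that the field names used in the parsing example (`stars`, `license`) are illustrative assumptions, not a documented schema for this API.

```python
import json
import urllib.request

# Public endpoint shown above (100 requests/day without a key).
API_URL = "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/mlcommons/mlperf_client"

def fetch_repo_quality(url: str = API_URL) -> dict:
    """Fetch and decode the repo quality record from the public endpoint."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)

# Parsing a hypothetical response. The keys "stars" and "license" are
# assumptions for illustration; check the real payload before relying on them.
sample = json.loads('{"stars": 80, "license": "Apache-2.0"}')
print(sample["stars"], sample["license"])
```

Using the standard library's `urllib.request` keeps the sketch dependency-free; a real client would likely add error handling for rate-limit responses.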
Higher-rated alternatives
NVIDIA/TransformerEngine
A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit and 4-bit...
mlcommons/inference
Reference implementations of MLPerf® inference benchmarks
mlcommons/training
Reference implementations of MLPerf® training benchmarks
datamade/usaddress
:us: a python library for parsing unstructured United States address strings into address components
GRAAL-Research/deepparse
Deepparse is a state-of-the-art library for parsing multinational street addresses using deep learning