mlcommons/mlperf_client

MLPerf Client is a benchmark for Windows, Linux and macOS, focusing on client form factors in ML inference scenarios.

Quality score: 41 / 100 (Emerging)

This tool helps evaluate how well your personal computer runs common AI tasks like chatbots or image recognition. You provide a configuration for your hardware and the AI models you want to test, and it outputs performance metrics showing how fast and efficiently those tasks run. It's designed for hardware enthusiasts, system integrators, or IT professionals who want to understand and compare client-side AI inference performance.

Use this if you need to benchmark AI inference performance on client-side devices such as Windows, Linux, or macOS laptops and desktops, across various hardware and software configurations.

Not ideal if you need to benchmark AI training performance or server-side inference, or if you require a graphical user interface for detailed analysis (the CLI is the primary interface).

Tags: AI-inference-benchmarking, personal-computer-performance, hardware-evaluation, client-device-testing, AI-model-performance
Package: none · Dependents: none
Maintenance: 6 / 25
Adoption: 9 / 25
Maturity: 16 / 25
Community: 10 / 25


Stars: 80
Forks: 6
Language: C++
License: Apache-2.0
Last pushed: Nov 17, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/mlcommons/mlperf_client"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
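The curl command above can also be scripted. A minimal Python sketch, assuming only the URL pattern shown above (the response field names are not documented here, so the record is treated as opaque JSON):

```python
import json
import urllib.request

# Base URL taken from the curl example above.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the quality-endpoint URL for a repository."""
    return f"{API_BASE}/{category}/{owner}/{repo}"


def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """Fetch the quality record; assumes the endpoint returns a JSON body."""
    with urllib.request.urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)


if __name__ == "__main__":
    # Print the URL for this repository's record (no network call).
    print(quality_url("ml-frameworks", "mlcommons", "mlperf_client"))
```

The `category` segment (`ml-frameworks` here) is inferred from the example URL; other valid categories are not listed on this page.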