mlcommons/training_results_v4.0
This repository contains the results and code for the MLPerf™ Training v4.0 benchmark.
This repository is for anyone comparing the performance of AI training systems. It provides detailed results from the standardized MLPerf benchmarks, showing how various hardware and software configurations perform when training common machine learning models. It is useful to researchers, IT managers, and system architects who need to evaluate or select AI infrastructure.
No commits in the last 6 months.
Use this if you need to understand or demonstrate the speed and efficiency of AI training hardware and software configurations.
Not ideal if you are looking for code to develop your own machine learning models or run your own custom benchmarks.
Stars: 12
Forks: 17
Language: Python
License: Apache-2.0
Category: ml-frameworks
Last pushed: Jun 11, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/mlcommons/training_results_v4.0"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
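For programmatic use, here is a minimal sketch in Python using only the standard library. The endpoint URL is taken from the curl command above; the response field names in the final line are assumptions for illustration, not a documented schema.

import json
import urllib.request

URL = ("https://pt-edge.onrender.com/api/v1/quality/"
       "ml-frameworks/mlcommons/training_results_v4.0")

# Anonymous access is limited to 100 requests/day; a free API key
# raises that to 1,000/day (how the key is passed is not documented
# here, so this sketch uses anonymous access).
req = urllib.request.Request(URL, headers={"Accept": "application/json"})
with urllib.request.urlopen(req) as resp:
    data = json.load(resp)

# Field names below are illustrative guesses at the payload shape.
print(data.get("stars"), data.get("last_pushed"))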
Higher-rated alternatives
NVIDIA/TransformerEngine
A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit and 4-bit...
mlcommons/inference
Reference implementations of MLPerf® inference benchmarks
mlcommons/training
Reference implementations of MLPerf® training benchmarks
datamade/usaddress
A Python library for parsing unstructured United States address strings into address components
GRAAL-Research/deepparse
Deepparse is a state-of-the-art library for parsing multinational street addresses using deep learning