mlcommons/storage
MLPerf® Storage Benchmark Suite
This suite helps system administrators and operations engineers evaluate how well their storage systems perform under machine learning workloads. You describe your AI cluster's storage configuration, and it reports performance metrics for common deep learning workloads such as U-Net3D, ResNet-50, and CosmoFlow. It is aimed at teams managing infrastructure that supports AI development and training.
Use this if you need to benchmark your storage infrastructure specifically for deep learning workloads and verify that it meets the demands of your AI training pipeline.
Not ideal if you want to benchmark general-purpose storage performance or to evaluate application-level machine learning model performance.
Stars: 175
Forks: 59
Language: Python
License: Apache-2.0
Category:
Last pushed: Mar 12, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/mlcommons/storage"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
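For scripted access, the curl call above can be reproduced from Python's standard library. This is a minimal sketch: the endpoint URL comes from the listing, but the `X-API-Key` header name and the JSON response shape are assumptions, not documented here.

```python
from urllib.request import Request

# Base path taken from the curl example in the listing above.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality"


def build_request(category, repo, api_key=None):
    """Build a urllib Request for the quality API.

    The "X-API-Key" header name is a guess for how a free key
    might be supplied; check the API's own docs before relying on it.
    """
    req = Request(f"{API_BASE}/{category}/{repo}")
    if api_key:
        req.add_header("X-API-Key", api_key)
    return req


# Anonymous request (100/day tier) for this repository's data.
req = build_request("ml-frameworks", "mlcommons/storage")
print(req.full_url)
```

Passing the returned `Request` to `urllib.request.urlopen` would fetch the same JSON payload as the curl command.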
Related frameworks
NVIDIA/TransformerEngine
A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit and 4-bit...
mlcommons/inference
Reference implementations of MLPerf® inference benchmarks
mlcommons/training
Reference implementations of MLPerf® training benchmarks
datamade/usaddress
:us: a python library for parsing unstructured United States address strings into address components
GRAAL-Research/deepparse
Deepparse is a state-of-the-art library for parsing multinational street addresses using deep learning