openml/automlbenchmark
OpenML AutoML Benchmarking Framework
This framework lets machine learning researchers and practitioners compare Automated Machine Learning (AutoML) systems. It takes a set of AutoML frameworks and a selection of curated datasets as input, then produces standardized evaluation results, so the performance of different AutoML tools can be assessed objectively.
Use this if you need a standardized and reproducible way to benchmark how well different AutoML systems perform on classification and regression tasks.
Not ideal if you are a business user looking for a low-code tool to quickly build and deploy ML models without evaluating the underlying AutoML system's performance.
Stars: 453
Forks: 146
Language: Python
License: MIT
Category: ml-frameworks
Last pushed: Mar 09, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/openml/automlbenchmark"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
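As a sketch, the same endpoint can be queried from Python using only the standard library. The response format is not documented here, so the code assumes the endpoint returns JSON (an assumption); the URL is the one shown in the curl example above.

```python
import json
import urllib.request

# Endpoint from the curl example above; path segments are
# category / owner / repository.
API_URL = (
    "https://pt-edge.onrender.com"
    "/api/v1/quality/ml-frameworks/openml/automlbenchmark"
)

def fetch_repo_quality(url: str = API_URL) -> dict:
    """Fetch quality data for a repository.

    Assumes the endpoint returns a JSON object; the exact schema
    is not documented on this page.
    """
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)

if __name__ == "__main__":
    data = fetch_repo_quality()
    print(json.dumps(data, indent=2))
```

The free tier allows 100 requests per day without a key, so a short cache (or a small delay between calls) is advisable when querying many repositories.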
Related frameworks
process-intelligence-solutions/pm4py
Official public repository for PM4Py (Process Mining for Python) — an open-source library for...
autogluon/autogluon
Fast and Accurate ML in 3 Lines of Code
microsoft/FLAML
A fast library for AutoML and tuning. Join our Discord: https://discord.gg/Cppx2vSPVP.
shankarpandala/lazypredict
Lazy Predict helps build many basic models without much code and helps understand which...
alteryx/evalml
EvalML is an AutoML library written in python.