openml/automlbenchmark

OpenML AutoML Benchmarking Framework

Quality score: 61 / 100 (Established)

This framework helps machine learning researchers and practitioners compare Automated Machine Learning (AutoML) systems. It takes a set of AutoML frameworks and a selection of curated datasets as input, then produces standardized evaluation results. It is intended for data scientists and ML researchers who want to assess the performance of different AutoML tools objectively.
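
As a rough sketch of how a run is launched (assuming a local clone of openml/automlbenchmark with its requirements installed; runbenchmark.py and the constantpredictor smoke-test framework come from the project's own quickstart, and the Python wrapper below is illustrative, not part of the project):

import subprocess

# Launch the repo's CLI entry point from Python. "constantpredictor" is the
# trivial baseline the project documents for smoke tests; real runs substitute
# an actual AutoML framework name. Assumption: invoked from the repo root.
subprocess.run(
    ["python", "runbenchmark.py", "constantpredictor"],
    check=True,  # raise CalledProcessError if the benchmark exits non-zero
)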

Use this if you need a standardized and reproducible way to benchmark how well different AutoML systems perform on classification and regression tasks.

Not ideal if you are a business user looking for a low-code tool to quickly build and deploy ML models without evaluating the underlying AutoML system's performance.

Tags: machine-learning-research, model-evaluation, automated-ml, predictive-modeling, performance-comparison
No package; no dependents.
Score breakdown (each component is out of 25; the four components sum to the overall 61 / 100):

Maintenance: 10 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 25 / 25

Stars: 453
Forks: 146
Language: Python
License: MIT
Last pushed: Mar 09, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/openml/automlbenchmark"

Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000/day.
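
A minimal Python equivalent of the curl call above (a sketch: only the endpoint URL is given on this page; the response is assumed to be JSON, and its exact fields are not documented here):

import json
import urllib.request

URL = "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/openml/automlbenchmark"

# Fetch the quality record from the documented endpoint (no key needed within
# the free 100 requests/day tier, per the note above).
with urllib.request.urlopen(URL, timeout=30) as resp:
    data = json.load(resp)  # assumed JSON body

print(json.dumps(data, indent=2))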