machinelearningnuremberg/HPO-B

[NeurIPS DBT 2021] HPO-B

Score: 47 / 100 (Emerging)

HPO-B is a benchmark that lets machine learning researchers and practitioners systematically compare and evaluate hyperparameter optimization (HPO) algorithms. It provides a standardized collection of past machine learning model evaluations, so you can test how well a new HPO method selects good configurations without retraining any models. You plug in your HPO algorithm, and the benchmark returns the best accuracy achieved after each trial, showing how your method performs against established baselines.
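To make the output format concrete: the list of maximum accuracies is a running best-so-far (incumbent) curve. Below is a minimal sketch in plain Python, with no HPO-B dependency, of how such a trace is derived from per-trial accuracies; the function name and sample values are illustrative, not part of HPO-B.

def incumbent_trace(trial_accuracies):
    """Running best-so-far accuracy after each trial.

    Mirrors the kind of output an HPO benchmark reports:
    position i holds the best accuracy seen in trials 0..i.
    """
    trace = []
    best = float("-inf")
    for acc in trial_accuracies:
        best = max(best, acc)
        trace.append(best)
    return trace

# Hypothetical per-trial accuracies from one HPO run:
print(incumbent_trace([0.71, 0.68, 0.74, 0.74, 0.79]))
# -> [0.71, 0.71, 0.74, 0.74, 0.79]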

Use this if you are developing new hyperparameter optimization algorithms and need a reliable, standardized way to compare their performance against a diverse set of real-world machine learning tasks and datasets.

Not ideal if you are looking for a tool to perform hyperparameter optimization for your own machine learning models; this is for benchmarking HPO algorithms themselves.
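For a sense of what plugging in an algorithm looks like in practice, here is a minimal sketch of evaluating a method through the handler class the project README describes. The module path, class name, evaluate signature, and the observe_and_suggest contract below are recalled from the repository's documentation and should be treated as assumptions; verify them against the repo before use.

import numpy as np
from hpob_handler import HPOBHandler  # assumed module path from the repo


class RandomSearch:
    """A trivial HPO method: pick a random pending configuration.

    Assumed contract: HPO-B calls observe_and_suggest with the observed
    configurations/scores and the pending candidates, and expects back
    the index of the next configuration to evaluate.
    """

    def observe_and_suggest(self, X_obs, y_obs, X_pen):
        return np.random.randint(len(X_pen))


# Assumed usage pattern: benchmark data under hpob-data/, v3 test split.
hpob_hdlr = HPOBHandler(root_dir="hpob-data/", mode="v3-test")
acc_trace = hpob_hdlr.evaluate(
    RandomSearch(),
    search_space_id="5971",   # example IDs; real IDs come from the data
    dataset_id="10093",
    seed="test0",
    n_trials=100,
)
print(acc_trace[-1])  # best accuracy found after 100 trials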

machine-learning-research algorithm-benchmarking hyperparameter-optimization model-evaluation performance-comparison
No package · No dependents
Maintenance: 6 / 25
Adoption: 7 / 25
Maturity: 16 / 25
Community: 18 / 25


Stars: 41
Forks: 12
Language: Python
License: MIT
Last pushed: Nov 08, 2025
Commits (last 30 days): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/machinelearningnuremberg/HPO-B"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
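The same data can be fetched from a script. Below is a minimal sketch using only the Python standard library; it assumes the endpoint returns a JSON body (the response fields are not documented here, so the example just prints whatever comes back).

import json
import urllib.request

URL = ("https://pt-edge.onrender.com/api/v1/quality/"
       "ml-frameworks/machinelearningnuremberg/HPO-B")

# No API key needed for up to 100 requests/day (per the note above).
with urllib.request.urlopen(URL, timeout=30) as resp:
    data = json.load(resp)  # assumes a JSON response body

print(json.dumps(data, indent=2))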