nicholaslourie/opda

Design and analyze optimal deep learning models.

Score: 40 / 100 (Emerging)

This tool helps machine learning engineers and researchers rigorously evaluate the true performance of their deep learning models. By analyzing performance as a function of hyperparameter tuning effort, it shows whether a change genuinely improves outcomes, how a new hyperparameter interacts with the data or the existing hyperparameters, and the best score a model can achieve. You feed in results from random hyperparameter searches and get back statistical analyses and visualizations of tuning curves, complete with confidence bands.
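A tuning curve reports the expected best score after n random search trials. The sketch below is not opda's own API; it is a minimal numpy illustration of the standard plug-in estimate of that curve under the empirical distribution, and the sample accuracies are made-up data.

import numpy as np

def tuning_curve(scores, max_trials=20):
    """Estimate E[best score after n random trials] for n = 1..max_trials.

    Plug-in estimator: for sorted scores y_(1) <= ... <= y_(N), the
    empirical CDF gives P(max of n trials <= y_(i)) = (i / N) ** n, so
    E[max] = sum_i y_(i) * ((i / N) ** n - ((i - 1) / N) ** n).
    """
    ys = np.sort(np.asarray(scores, dtype=float))
    n_obs = len(ys)
    ranks = np.arange(1, n_obs + 1)
    curve = []
    for n in range(1, max_trials + 1):
        # Probability that the best of n trials equals each order statistic.
        weights = (ranks / n_obs) ** n - ((ranks - 1) / n_obs) ** n
        curve.append(float(np.dot(weights, ys)))
    return curve

# Made-up accuracies from 8 random hyperparameter search trials.
accuracies = [0.71, 0.74, 0.68, 0.77, 0.73, 0.75, 0.70, 0.76]
for n, score in enumerate(tuning_curve(accuracies, max_trials=5), start=1):
    print(f"expected best accuracy after {n} trial(s): {score:.3f}")

opda builds on this kind of estimate, adding the statistical machinery described above, including confidence bands around the curve.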

No commits in the last 6 months. Available on PyPI.

Use this if you need to determine statistically whether a change to your deep learning model or its hyperparameters actually improves performance, especially when accounting for tuning effort.

Not ideal if you are looking for an automated hyperparameter optimization tool, as this focuses on the statistical analysis of tuning efforts rather than performing the tuning itself.

deep-learning-evaluation model-performance hyperparameter-analysis machine-learning-research statistical-model-comparison
Status: Stale (no commits in 6 months)
Maintenance: 2 / 25
Adoption: 7 / 25
Maturity: 25 / 25
Community: 6 / 25

Stars: 29
Forks: 2
Language: Jupyter Notebook
License: Apache-2.0
Last pushed: Aug 02, 2025
Commits (30d): 0
Dependencies: 2

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/nicholaslourie/opda"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
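The same endpoint can be called from Python. A minimal sketch assuming the third-party requests package; the response schema is not documented here, so the JSON is printed raw.

import requests

# Public quality-data endpoint for this project (no API key needed
# at the 100 requests/day tier).
URL = "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/nicholaslourie/opda"

response = requests.get(URL, timeout=10)
response.raise_for_status()  # fail loudly on HTTP errors

data = response.json()
print(data)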