nicholaslourie/opda
Design and analyze optimal deep learning models.
This tool helps machine learning engineers and researchers rigorously evaluate the true performance of their deep learning models. By analyzing performance as a function of hyperparameter tuning effort, it can tell you whether a change genuinely improves results, which data or existing hyperparameters a new hyperparameter interacts with, and the best score a model can possibly achieve. You feed in results from random hyperparameter search and get back statistical analyses and visualizations of tuning curves, complete with confidence bands.
No commits in the last 6 months. Available on PyPI.
Use this if you need to determine statistically whether a change to your deep learning model or its hyperparameters actually improves performance, especially when accounting for tuning effort.
Not ideal if you are looking for an automated hyperparameter optimization tool, as this focuses on the statistical analysis of tuning efforts rather than performing the tuning itself.
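For a sense of what a tuning-curve analysis involves, here is a minimal, self-contained NumPy sketch of the core idea: estimating the median best score after n rounds of random search, with a distribution-free confidence band. This illustrates the statistical technique only; it is not opda's actual API, and the function names and the DKW-based band are assumptions made for the sketch.

import numpy as np

def median_tuning_curve(ys, ns):
    # Median of the best score after n random-search trials.
    # The max of n i.i.d. scores has CDF F(y)**n, so its median is the
    # 0.5**(1/n) quantile of the score distribution, estimated here
    # from the empirical CDF of the observed scores.
    ys = np.sort(np.asarray(ys, dtype=float))
    qs = 0.5 ** (1.0 / np.asarray(ns, dtype=float))
    return np.quantile(ys, qs)

def tuning_curve_band(ys, ns, confidence=0.80):
    # Distribution-free confidence band via the DKW inequality:
    # the true CDF stays within +/- eps of the empirical CDF,
    # simultaneously over all scores, with probability >= confidence.
    ys = np.sort(np.asarray(ys, dtype=float))
    qs = 0.5 ** (1.0 / np.asarray(ns, dtype=float))
    eps = np.sqrt(np.log(2.0 / (1.0 - confidence)) / (2.0 * len(ys)))
    lower = np.quantile(ys, np.clip(qs - eps, 0.0, 1.0))
    upper = np.quantile(ys, np.clip(qs + eps, 0.0, 1.0))
    return lower, upper

# Example: 48 validation accuracies from one random search.
rng = np.random.default_rng(0)
ys = rng.beta(8, 2, size=48)   # stand-in for real scores
ns = np.arange(1, 21)          # tuning effort: 1..20 trials
curve = median_tuning_curve(ys, ns)
lo, hi = tuning_curve_band(ys, ns)

opda itself provides more refined nonparametric estimators than this sketch, but the shape of the analysis is the same: scores from random search in, a tuning curve with confidence bands out.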
Stars: 29
Forks: 2
Language: Jupyter Notebook
License: Apache-2.0
Category: ml-frameworks
Last pushed: Aug 02, 2025
Commits (30d): 0
Dependencies: 2
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/nicholaslourie/opda"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
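The same data can be fetched from any HTTP client. Below is a minimal Python equivalent of the curl command above, using only the standard library; it assumes the endpoint returns a JSON body, since the response schema isn't shown here.

import json
import urllib.request

# Same public endpoint as the curl example (100 requests/day, no key).
URL = ("https://pt-edge.onrender.com/api/v1/quality/"
       "ml-frameworks/nicholaslourie/opda")

with urllib.request.urlopen(URL, timeout=10) as response:
    data = json.load(response)  # assumes JSON; schema not documented here

print(json.dumps(data, indent=2))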
Higher-rated alternatives
SimonBlanke/Gradient-Free-Optimizers
Lightweight optimization with local, global, population-based and sequential techniques across...
Gurobi/gurobi-machinelearning
Formulate trained predictors in Gurobi models
emdgroup/baybe
Bayesian Optimization and Design of Experiments
heal-research/pyoperon
Python bindings and scikit-learn interface for the Operon library for symbolic regression.
simon-hirsch/ondil
A package for online distributional learning.