cmu-sei/juneberry
Juneberry improves the experience of machine learning experimentation by providing a framework for automating the training, evaluation and comparison of multiple models against multiple datasets, reducing errors and improving reproducibility.
This helps machine learning engineers and researchers manage complex experimentation workflows. You provide datasets, model definitions, and experiment configurations; Juneberry then automates training, evaluating, and comparing multiple models across those datasets. The output includes performance metrics and reports, making results more reproducible and less error-prone.
No commits in the last 6 months.
Use this if you need to systematically compare different machine learning models and datasets to find the best performing solution with high confidence.
Not ideal if you are a data scientist who primarily uses notebooks for quick, exploratory model development and evaluation.
Stars: 33
Forks: 3
Language: Python
License: —
Category: ml-frameworks
Last pushed: Apr 14, 2023
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/cmu-sei/juneberry"
Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000 requests/day.
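The endpoint above follows a `/quality/{category}/{owner}/{repo}` pattern. A minimal Python sketch of the same request, using only the standard library; the helper names are mine, and the assumption that the response body is JSON is not documented here:

```python
import json
import urllib.request


def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the quality-data URL, following the pattern in the curl example."""
    return f"https://pt-edge.onrender.com/api/v1/quality/{category}/{owner}/{repo}"


def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """Fetch and decode the response (assumed JSON; 100 requests/day without a key)."""
    with urllib.request.urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)


# Same target as the curl command above:
url = quality_url("ml-frameworks", "cmu-sei", "juneberry")
```

How an API key is passed (header or query parameter) is not stated on this page, so the sketch omits it.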
Higher-rated alternatives
optuna/optuna
A hyperparameter optimization framework
keras-team/keras-tuner
A Hyperparameter Tuning Library for Keras
KernelTuner/kernel_tuner
Kernel Tuner
syne-tune/syne-tune
Large scale and asynchronous Hyperparameter and Architecture Optimization at your fingertips.
deephyper/deephyper
DeepHyper: A Python Package for Massively Parallel Hyperparameter Optimization in Machine Learning