cmu-sei/juneberry

Juneberry improves the experience of machine learning experimentation by providing a framework that automates the training, evaluation, and comparison of multiple models against multiple datasets, reducing errors and improving reproducibility.

Score: 32/100 (Emerging)

This helps machine learning engineers and researchers manage complex experimentation workflows. You provide your datasets, model definitions, and experiment configurations, and it automates the process of training, evaluating, and comparing multiple models across those datasets. The output includes performance metrics and comparison reports, making results more reproducible and less error-prone for anyone working with machine learning models.

No commits in the last 6 months.

Use this if you need to systematically compare different machine learning models and datasets to find the best performing solution with high confidence.

Not ideal if you are a data scientist who primarily uses notebooks for quick, exploratory model development and evaluation.

machine-learning-engineering model-evaluation ML-experimentation deep-learning-research algorithm-comparison
Stale (6 months) · No Package · No Dependents
Maintenance 0 / 25
Adoption 7 / 25
Maturity 16 / 25
Community 9 / 25


Stars: 33
Forks: 3
Language: Python
License: not specified
Last pushed: Apr 14, 2023
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/cmu-sei/juneberry"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
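For programmatic use, the curl command above can be wrapped in a small script. The sketch below builds the endpoint URL and decodes the JSON response; note that the response field names are not documented on this page, so the decoded dictionary's schema is an assumption.

```python
# Query the pt-edge quality API for a repository's score card.
# Note: the API's JSON response schema is not documented here,
# so fetch_quality() simply returns the decoded dict as-is.
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(collection: str, owner: str, repo: str) -> str:
    """Build the API URL for a repository in a given collection."""
    return f"{BASE}/{collection}/{owner}/{repo}"


def fetch_quality(collection: str, owner: str, repo: str) -> dict:
    """Fetch and decode the quality record (counts against the daily quota)."""
    with urllib.request.urlopen(quality_url(collection, owner, repo)) as resp:
        return json.load(resp)


if __name__ == "__main__":
    print(quality_url("ml-frameworks", "cmu-sei", "juneberry"))
```

Without an API key this shares the 100-requests/day anonymous quota, so cache responses rather than calling the endpoint in a loop.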