Sanaelotfi/Bayesian_model_comparison

Supporting code for the paper "Bayesian Model Selection, the Marginal Likelihood, and Generalization".

Score: 34 / 100 (Emerging)

This project provides experimental code for researchers and machine learning practitioners who are evaluating statistical models. It examines the nuances of using the marginal likelihood for tasks like model selection and hyperparameter tuning, specifically for deep neural networks. By running these experiments, you can compare models and study how well the marginal likelihood tracks generalization to unseen data.
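As a toy illustration of the quantity the paper studies, the marginal likelihood of Bayesian linear regression is available in closed form, so candidate models can be scored directly. The following is a minimal sketch, not code from this repo; the polynomial features, prior variance, and noise variance are illustrative assumptions:

    import numpy as np
    from scipy.stats import multivariate_normal

    def log_marginal_likelihood(Phi, y, prior_var=1.0, noise_var=0.1):
        """Exact log evidence for Bayesian linear regression.

        With a zero-mean Gaussian prior on the weights, the targets are
        marginally Gaussian: y ~ N(0, noise_var * I + prior_var * Phi Phi^T).
        """
        cov = noise_var * np.eye(len(y)) + prior_var * Phi @ Phi.T
        return multivariate_normal(mean=np.zeros(len(y)), cov=cov).logpdf(y)

    # Score polynomial models of increasing degree on toy data; the model
    # with the highest log evidence wins the Bayesian model comparison.
    rng = np.random.default_rng(0)
    x = np.linspace(-1.0, 1.0, 30)
    y = np.sin(2.0 * x) + 0.1 * rng.standard_normal(x.size)

    for degree in (1, 3, 9):
        Phi = np.vander(x, degree + 1, increasing=True)  # columns: 1, x, x^2, ...
        print(f"degree {degree}: log evidence = {log_marginal_likelihood(Phi, y):.2f}")

The repo's experiments pursue the same comparison for deep neural networks, where the evidence has no closed form and must be approximated.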

No commits in the last 6 months.

Use this if you are a machine learning researcher or practitioner who needs to rigorously compare models, select architectures, or tune hyperparameters for deep learning, and who wants to understand the theoretical and practical implications of using the marginal likelihood.

Not ideal if you are looking for a plug-and-play tool for immediate model comparison without delving into the underlying theory, or if you are not working with Bayesian methods.

Topics: model-comparison, hyperparameter-tuning, deep-learning-research, statistical-modeling, generalization-analysis
Status: Stale (6 months) · No Package · No Dependents

Score breakdown:
Maintenance: 0 / 25
Adoption: 7 / 25
Maturity: 16 / 25
Community: 11 / 25

Stars: 36
Forks: 4
Language: Jupyter Notebook
License: MIT
Last pushed: Jun 16, 2022
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/Sanaelotfi/Bayesian_model_comparison"

Open to everyone: 100 requests/day, no key needed. Get a free key for 1,000 requests/day.
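The same endpoint can also be queried from Python. A minimal sketch using the requests library; the URL is the one shown above, and the payload is assumed to be JSON (the field names are not documented here, so the sketch just prints whatever comes back):

    import requests

    # Fetch the quality report for this repo. No API key is needed within
    # the free tier (100 requests/day, per the note above).
    url = ("https://pt-edge.onrender.com/api/v1/quality/"
           "ml-frameworks/Sanaelotfi/Bayesian_model_comparison")
    resp = requests.get(url, timeout=10)
    resp.raise_for_status()
    print(resp.json())  # assumed JSON payload; inspect resp.text if not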