thomasj02/DeepLearningProjectWorkflow

Machine Learning Workflow, from Andrew Ng's lecture at Deep Learning Summer School 2016

Score: 38 / 100 (Emerging)

This document outlines a structured approach to improving the performance of a deep learning model. It guides you through analyzing model errors, such as the gaps between human-level performance, training error, and dev-set error, to identify whether your model is underfitting (high bias) or overfitting (high variance). This helps machine learning practitioners efficiently diagnose and address common issues in their deep learning projects.
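The diagnostic logic described above can be sketched in a few lines. This is a minimal illustration in the spirit of the workflow, not the repository's own code: the function name, the error values, and the `tolerance` threshold are all assumptions chosen for the example, and error rates are expressed as fractions.

```python
def diagnose(human_error, train_error, dev_error, tolerance=0.005):
    """Rough bias/variance diagnosis from measured error rates.

    All errors are fractions (e.g. 0.02 means 2%). `tolerance` is an
    illustrative cutoff for what counts as a meaningful gap; it is not
    prescribed by the workflow itself.
    """
    avoidable_bias = train_error - human_error  # gap to human-level performance
    variance = dev_error - train_error          # gap from training to dev set

    if avoidable_bias > tolerance and avoidable_bias >= variance:
        return "underfitting: try a bigger model or train longer"
    if variance > tolerance:
        return "overfitting: try more data or regularization"
    return "near human-level: refine the metric or collect better labels"
```

For example, a model with 1% human error, 8% training error, and 10% dev error is dominated by avoidable bias, so the sketch reports underfitting first.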

410 stars. No commits in the last 6 months.

Use this if you are a machine learning engineer or data scientist struggling to improve your model's accuracy and need a systematic way to debug its performance.

Not ideal if you are new to machine learning and haven't yet built a basic model, as it assumes familiarity with concepts like training and test sets.

machine-learning-workflow model-debugging deep-learning-diagnostics model-optimization error-analysis
No license · Stale (6 months) · No package · No dependents
Maintenance 0 / 25
Adoption 10 / 25
Maturity 8 / 25
Community 20 / 25


Stars: 410
Forks: 62
Language: (not listed)
License: (none)
Last pushed: Mar 05, 2017
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/thomasj02/DeepLearningProjectWorkflow"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.