0xHadyy/LogisticLearn
Logistic Regression from Scratch - NumPy implementation with L1 and L2 regularization, cross-validation, grid search, and sklearn benchmarks. Complete math derivations and code.
This project offers a deep dive into binary logistic regression, a fundamental machine learning algorithm. It provides a complete, step-by-step implementation in Python using NumPy, covering the core model, regularization, and evaluation techniques, alongside the underlying mathematical derivations.
Use this if you are a data scientist or machine learning engineer who wants to understand, implement, and fine-tune binary logistic regression models from scratch, rather than just using off-the-shelf libraries.
Not ideal if you primarily need to quickly apply pre-built machine learning models without understanding their internal mechanics, or if you're working with extremely large datasets where pure NumPy might be too slow.
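To give a flavor of what "from scratch with NumPy" means here, below is a minimal sketch of binary logistic regression trained by batch gradient descent with an L2 penalty. This is an illustrative assumption of the general technique, not the repository's actual code; the function names (`sigmoid`, `fit_logistic`) and hyperparameters are hypothetical.

```python
import numpy as np

def sigmoid(z):
    # Numerically stable sigmoid: avoids overflow for large |z|
    return np.where(z >= 0,
                    1.0 / (1.0 + np.exp(-np.abs(z))),
                    np.exp(-np.abs(z)) / (1.0 + np.exp(-np.abs(z))))

def fit_logistic(X, y, lr=0.1, n_iters=1000, l2=0.0):
    """Batch gradient descent on the L2-regularized negative log-likelihood.
    (Hypothetical helper for illustration, not the repo's API.)"""
    n, d = X.shape
    w = np.zeros(d)
    b = 0.0
    for _ in range(n_iters):
        p = sigmoid(X @ w + b)                 # predicted probabilities
        grad_w = X.T @ (p - y) / n + l2 * w    # gradient of loss + L2 penalty
        grad_b = np.mean(p - y)
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Toy usage on linearly separable data
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)
w, b = fit_logistic(X, y, lr=0.5, n_iters=2000, l2=0.01)
preds = (sigmoid(X @ w + b) >= 0.5).astype(float)
accuracy = np.mean(preds == y)
```

The gradient used above, (1/n) Xᵀ(p − y) + λw, follows directly from differentiating the cross-entropy loss plus the L2 term, which is the kind of derivation the project works through in full.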
Stars
24
Forks
4
Language
Jupyter Notebook
License
—
Category
Last pushed
Oct 22, 2025
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/0xHadyy/LogisticLearn"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
stabgan/Multiple-Linear-Regression
Implementation of Multiple Linear Regression both in Python and R
SENATOROVAI/Normal-equation-solver-multiple-linear-regression-course
Multiple Linear Regression (MLR) models the linear relationship between a continuous dependent...
SENATOROVAI/Normal-equations-scalar-form-solver-simple-linear-regression-course
The normal equations for simple linear regression are a system of two linear equations used to...
SENATOROVAI/underfitting-overfitting-polynomial-regression-course
Underfitting and overfitting are critical concepts in machine learning, particularly when using...
andrescorrada/IntroductionToAlgebraicEvaluation
A collection of essays and code on algebraic methods to evaluate noisy judges on unlabeled test data.