Sarvandani/Machine_learning-deep_learning_11_algorithms-of-regression
sklearn, tensorflow, random-forest, adaboost, decision-trees, polynomial-regression, g-boost, knn, extratrees, svr, ridge, bayesian-ridge
This project shows how different non-linear regression techniques can be applied to predict outcomes when the relationship between input and output isn't a straight line. Given a dataset with one independent variable and one dependent variable, it produces predictions and visualizations for 11 different non-linear regression models. This is useful for data analysts, scientists, and researchers who need to model complex relationships in their data.
No commits in the last 6 months.
Use this if you need to explore and compare various non-linear regression models to predict a continuous outcome from a single input variable.
Not ideal if your data has multiple input variables or if you need to predict categorical outcomes.
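The kind of comparison the notebooks perform can be sketched with scikit-learn. This is an illustrative assumption, not the repo's actual code: the synthetic dataset, the four models chosen here (a subset of the 11 listed in the tags), and their hyperparameters are all placeholders.

```python
# Hedged sketch: comparing a few of the tagged non-linear regressors
# on a 1-D synthetic dataset (one independent, one dependent variable).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import Ridge
from sklearn.tree import DecisionTreeRegressor
from sklearn.neighbors import KNeighborsRegressor
from sklearn.svm import SVR
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
X = rng.uniform(0, 6, size=(200, 1))               # one input variable
y = np.sin(X).ravel() + rng.normal(0, 0.1, 200)    # non-linear target

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Four of the techniques named in the repo tags (illustrative settings).
models = {
    "polynomial_ridge": make_pipeline(PolynomialFeatures(degree=5), Ridge()),
    "decision_tree": DecisionTreeRegressor(max_depth=5, random_state=0),
    "knn": KNeighborsRegressor(n_neighbors=5),
    "svr_rbf": SVR(kernel="rbf", C=10),
}

scores = {}
for name, model in models.items():
    model.fit(X_train, y_train)
    scores[name] = r2_score(y_test, model.predict(X_test))
    print(f"{name}: R^2 = {scores[name]:.3f}")
```

Scoring each fitted model with a common metric (here R²) on a held-out split is what makes the side-by-side comparison meaningful; the repo additionally plots each model's fitted curve.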
Stars
11
Forks
1
Language
Jupyter Notebook
License
MIT
Category
Last pushed
Jul 13, 2023
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/Sarvandani/Machine_learning-deep_learning_11_algorithms-of-regression"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
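The same request can be issued from Python with the standard library. The base URL comes from the curl example above; the helper names and the shape of the JSON response are assumptions for illustration.

```python
# Hedged sketch of calling the quality API from Python (stdlib only).
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks"

def quality_url(owner: str, repo: str) -> str:
    """Build the API URL for a given GitHub owner/repo pair."""
    return f"{BASE}/{owner}/{repo}"

def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch and decode the quality record (100 requests/day without a key)."""
    with urllib.request.urlopen(quality_url(owner, repo)) as resp:
        return json.loads(resp.read())

url = quality_url(
    "Sarvandani", "Machine_learning-deep_learning_11_algorithms-of-regression"
)
print(url)  # call fetch_quality(...) to retrieve the live JSON record
```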
Higher-rated alternatives
stabgan/Multiple-Linear-Regression
Implementation of Multiple Linear Regression both in Python and R
SENATOROVAI/Normal-equation-solver-multiple-linear-regression-course
Multiple Linear Regression (MLR) models the linear relationship between a continuous dependent...
SENATOROVAI/Normal-equations-scalar-form-solver-simple-linear-regression-course
The normal equations for simple linear regression are a system of two linear equations used to...
SENATOROVAI/underfitting-overfitting-polynomial-regression-course
Underfitting and overfitting are critical concepts in machine learning, particularly when using...
andrescorrada/IntroductionToAlgebraicEvaluation
A collection of essays and code on algebraic methods to evaluate noisy judges on unlabeled test data.