teddyoweh/Dimensionality-Reduction-PCA

Dimensionality reduction is the process of reducing the number of random features, attributes, or variables (here called dimensions) in a dataset while retaining as much of its variation as possible, keeping only the relevant features to increase the efficiency of a model.
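The most common technique for this is PCA (principal component analysis), which this repo is named after. Below is a minimal NumPy sketch of PCA via SVD; it is an illustration of the general technique, not the project's actual notebook code, and the function name `pca` is our own.

```python
import numpy as np

def pca(X, n_components):
    # Center the data so each feature has zero mean
    X_centered = X - X.mean(axis=0)
    # SVD of the centered data: rows of Vt are the principal directions
    U, S, Vt = np.linalg.svd(X_centered, full_matrices=False)
    # Project the data onto the top n_components directions
    components = Vt[:n_components]
    return X_centered @ components.T

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))   # 100 samples, 5 features
X_reduced = pca(X, 2)
print(X_reduced.shape)          # (100, 2)
```

The reduced dataset keeps the directions of greatest variance, which is what "leaving as much variation as possible" means in practice.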

13 / 100 · Experimental

This project helps data scientists and machine learning engineers prepare their datasets for model training. It takes a dataset with many attributes or variables and identifies only the most relevant ones. The outcome is a simplified dataset that can lead to faster training, more accurate predictions, and models that are less prone to errors.
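The listing does not say how the project decides which components are "most relevant"; a common rule (shown here as a sketch, not the repo's actual method) is to keep the fewest principal components that together explain some threshold of the variance, e.g. 95%.

```python
import numpy as np

def n_components_for_variance(X, threshold=0.95):
    # Squared singular values of the centered data are proportional to
    # the variance captured along each principal axis
    X_centered = X - X.mean(axis=0)
    _, S, _ = np.linalg.svd(X_centered, full_matrices=False)
    explained = (S ** 2) / np.sum(S ** 2)
    cumulative = np.cumsum(explained)
    # Smallest number of components whose cumulative share meets the threshold
    return int(np.searchsorted(cumulative, threshold) + 1)

rng = np.random.default_rng(1)
# 3 informative features plus 7 near-redundant linear mixtures of them
base = rng.normal(size=(200, 3))
noise = 0.01 * rng.normal(size=(200, 7))
X = np.hstack([base, base @ rng.normal(size=(3, 7)) + noise])
print(n_components_for_variance(X, 0.95))
```

Even though the dataset has 10 columns, almost all of its variance lives in the 3-dimensional subspace spanned by the informative features, so only a handful of components are needed.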

No commits in the last 6 months.

Use this if your machine learning models are training slowly, overfitting, or performing poorly because the dataset has too many irrelevant or redundant features.

Not ideal if you need to retain every single piece of original data for interpretability or if your dataset already has a very small number of features.

data-preparation machine-learning-engineering predictive-modeling model-optimization statistical-analysis
No License Stale 6m No Package No Dependents
Maintenance 0 / 25
Adoption 5 / 25
Maturity 8 / 25
Community 0 / 25

How are scores calculated?

Stars: 11
Forks:
Language: Jupyter Notebook
License: None
Last pushed: Apr 28, 2022
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/teddyoweh/Dimensionality-Reduction-PCA"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.