unslothai/hyperlearn
2-2000x faster ML algos, 50% less memory usage, works on all hardware - new and old.
This project helps data scientists, machine learning engineers, and researchers analyze large datasets faster and with less computing power. It takes your existing data, applies common machine learning algorithms such as linear regression or SVD, and delivers results significantly faster while using less memory than standard tools. It's designed for anyone working with big data who needs to accelerate model training and analysis.
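The claims above concern dense linear-algebra workloads. As a point of reference, here is a minimal Python sketch that times the kind of standard-tool baseline hyperlearn says it accelerates (a thin SVD in plain numpy); it does not call hyperlearn itself, and the matrix shape is an arbitrary example.

import time
import numpy as np

# A tall dense matrix, the shape of problem described above:
# many samples, fewer features. Size chosen arbitrarily.
rng = np.random.default_rng(0)
X = rng.standard_normal((20_000, 200))

start = time.perf_counter()
# Thin SVD: the standard-tool baseline hyperlearn compares against.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
print(f"numpy thin SVD on {X.shape}: {time.perf_counter() - start:.2f}s")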
2,406 stars. No commits in the last 6 months.
Use this if you are a data scientist or researcher struggling with slow machine learning algorithms or running out of memory when processing large datasets.
Not ideal if you are a beginner just learning machine learning and primarily working with small datasets where performance isn't a critical concern.
Stars: 2,406
Forks: 153
Language: Jupyter Notebook
License: Apache-2.0
Category: ml-frameworks
Last pushed: Nov 19, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/unslothai/hyperlearn"
Open to everyone: 100 requests/day, no key needed. Get a free key for 1,000 requests/day.
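For scripted access, here is a minimal Python sketch of the same call using only the standard library. The response schema isn't documented on this page, so the example simply pretty-prints whatever JSON the endpoint returns.

import json
import urllib.request

# Same endpoint as the curl example above; no key is needed
# for up to 100 requests/day.
URL = "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/unslothai/hyperlearn"

with urllib.request.urlopen(URL, timeout=10) as resp:
    data = json.load(resp)

# Inspect the payload to see what fields the API actually returns.
print(json.dumps(data, indent=2))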
Higher-rated alternatives
deepspeedai/DeepSpeed
DeepSpeed is a deep learning optimization library that makes distributed training and inference...
helmholtz-analytics/heat
Distributed tensors and Machine Learning framework with GPU and MPI acceleration in Python
hpcaitech/ColossalAI
Making large AI models cheaper, faster and more accessible
horovod/horovod
Distributed training framework for TensorFlow, Keras, PyTorch, and Apache MXNet.
bsc-wdc/dislib
The Distributed Computing library for python implemented using PyCOMPSs programming model for HPC.