geoffxy/habitat
🔮 Execution time predictions for deep neural network training iterations across different GPUs.
This tool helps machine learning engineers and researchers estimate how long a deep neural network will take to train on a specific GPU. You describe your model and the GPU you have in mind, and it predicts the execution time of each training iteration, so you can plan and optimize your experiments before committing to hardware.
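The core idea behind this kind of predictor is to measure each operation's runtime on the GPU you have and scale it to the GPU you are interested in. The sketch below illustrates that scaling idea in miniature; the device specs, the trace format, and the simple roofline-style scaling rule are illustrative assumptions, not Habitat's actual model or API.

```python
from dataclasses import dataclass

@dataclass
class GPUSpec:
    # Illustrative specs; a real predictor uses many more device parameters.
    name: str
    peak_flops_tflops: float   # peak FP32 throughput
    mem_bandwidth_gbps: float  # peak memory bandwidth

# Hypothetical numbers, for illustration only.
RTX2080TI = GPUSpec("RTX 2080 Ti", 13.4, 616.0)
V100 = GPUSpec("V100", 15.7, 900.0)

def scale_op_time(ms_on_origin: float, origin: GPUSpec, target: GPUSpec,
                  compute_bound: bool) -> float:
    """Scale one operation's measured time from the origin to the target GPU.

    A compute-bound op is scaled by the ratio of peak FLOPS; a memory-bound
    op by the ratio of memory bandwidth. This is a deliberately simplified
    roofline-style rule, not the tool's actual per-operation model.
    """
    if compute_bound:
        ratio = origin.peak_flops_tflops / target.peak_flops_tflops
    else:
        ratio = origin.mem_bandwidth_gbps / target.mem_bandwidth_gbps
    return ms_on_origin * ratio

# A tiny made-up "trace": (op name, measured ms on origin GPU, compute-bound?)
trace = [("conv2d", 4.0, True), ("relu", 0.5, False), ("matmul", 2.0, True)]

predicted_ms = sum(scale_op_time(ms, RTX2080TI, V100, cb) for _, ms, cb in trace)
print(f"Predicted iteration time on {V100.name}: {predicted_ms:.2f} ms")
```

Summing the scaled per-operation times yields the predicted iteration time on the target GPU; since the (hypothetical) V100 is faster on both axes here, the prediction comes out below the 6.5 ms measured on the origin device.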
No commits in the last 6 months.
Use this if you need to predict the training speed of deep neural networks on different GPUs to optimize your resource allocation and experiment planning.
Not ideal if you are looking for a simple, out-of-the-box solution that doesn't require compiling from source within a Docker container.
Stars: 63
Forks: 15
Language: Python
License: Apache-2.0
Category: ML Frameworks
Last pushed: Nov 26, 2022
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/geoffxy/habitat"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
deepspeedai/DeepSpeed
DeepSpeed is a deep learning optimization library that makes distributed training and inference...
helmholtz-analytics/heat
Distributed tensors and Machine Learning framework with GPU and MPI acceleration in Python
hpcaitech/ColossalAI
Making large AI models cheaper, faster and more accessible
horovod/horovod
Distributed training framework for TensorFlow, Keras, PyTorch, and Apache MXNet.
bsc-wdc/dislib
The Distributed Computing library for python implemented using PyCOMPSs programming model for HPC.