astariul/gibbs
Scale your ML workers asynchronously across processes and machines
This tool helps machine learning engineers serve their Python-based models efficiently. It wraps your existing model code so it can process many requests concurrently, even across multiple machines. It is aimed at ML engineers and backend developers whose models must handle high volumes of predictions or data processing quickly.
No commits in the last 6 months. Available on PyPI.
Use this if you are deploying a machine learning model (or any Python function) and need it to handle many incoming requests in parallel without delays.
Not ideal if your application does not run computationally intensive Python code or machine learning models that need to scale.
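The core pattern gibbs targets, fanning calls to a Python model out across worker processes, can be sketched with nothing but the standard library. Note this is a generic illustration using `concurrent.futures`, not gibbs's actual API; `predict` is a hypothetical stand-in for your model.

```python
# Generic sketch of scaling a Python "model" across processes.
# Uses only the standard library (concurrent.futures), NOT gibbs's API.
from concurrent.futures import ProcessPoolExecutor


def predict(x: int) -> int:
    # Hypothetical stand-in for a computationally intensive model call.
    return x * x


if __name__ == "__main__":
    requests = list(range(8))
    # Each request is dispatched to one of 4 worker processes.
    with ProcessPoolExecutor(max_workers=4) as pool:
        results = list(pool.map(predict, requests))
    print(results)  # [0, 1, 4, 9, 16, 25, 36, 49]
```

A tool like gibbs layers asynchronous request handling and multi-machine dispatch on top of this basic fan-out idea.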
Stars: 13
Forks: —
Language: Python
License: MIT
Category: —
Last pushed: Apr 01, 2025
Commits (30d): 0
Dependencies: 3
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/astariul/gibbs"
Open to everyone: 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
deepspeedai/DeepSpeed
DeepSpeed is a deep learning optimization library that makes distributed training and inference...
helmholtz-analytics/heat
Distributed tensors and Machine Learning framework with GPU and MPI acceleration in Python
hpcaitech/ColossalAI
Making large AI models cheaper, faster and more accessible
horovod/horovod
Distributed training framework for TensorFlow, Keras, PyTorch, and Apache MXNet.
bsc-wdc/dislib
The Distributed Computing library for python implemented using PyCOMPSs programming model for HPC.