astariul/gibbs

Scale your ML workers asynchronously across processes and machines

Score: 30 / 100 (Emerging)

Gibbs helps machine learning engineers run their Python-based models efficiently. It wraps your existing model code so it can process many requests concurrently, even across multiple machines. It is aimed at ML engineers and backend developers whose models must handle high volumes of predictions or data processing quickly.

No commits in the last 6 months. Available on PyPI.

Use this if you are deploying a machine learning model or any Python function and need it to handle many incoming requests in parallel without delays.

Not ideal if your application doesn't run computationally intensive Python code or ML models that need scaling.
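Gibbs's own API isn't shown on this card. As a general sketch of the pattern it implements (fanning incoming requests out to a pool of worker processes), the same idea can be expressed with Python's standard library; `predict` here is a hypothetical stand-in for a model call, not part of gibbs:

```python
from multiprocessing import Pool

def predict(x):
    # Stand-in for a computationally intensive model call.
    return x * x

if __name__ == "__main__":
    # A pool of worker processes handles many requests in parallel --
    # the single-machine version of the pattern gibbs generalizes
    # across processes and machines.
    with Pool(processes=4) as pool:
        results = pool.map(predict, range(8))
    print(results)  # [0, 1, 4, 9, 16, 25, 36, 49]
```

`Pool.map` preserves input order, so callers get results back in the order requests were submitted even though workers run concurrently.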

MLOps model-deployment backend-engineering real-time-inference scalable-systems
Stale (6 months)
Maintenance: 0 / 25
Adoption: 5 / 25
Maturity: 25 / 25
Community: 0 / 25


Stars: 13
Forks:
Language: Python
License: MIT
Last pushed: Apr 01, 2025
Commits (30d): 0
Dependencies: 3

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/astariul/gibbs"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
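For programmatic use, the same endpoint can be fetched from Python with only the standard library. This is a sketch: the URL is copied from the curl example above, and the shape of the JSON response isn't documented on this card:

```python
import json
import urllib.request

# Endpoint copied from the curl example above; no key is needed
# for up to 100 requests/day.
QUALITY_URL = "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/astariul/gibbs"

def fetch_quality(url: str = QUALITY_URL) -> dict:
    """Fetch the quality report and parse the JSON body."""
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)
```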