SohamGovande/podplex

🦾💻🌐 distributed training & serverless inference at scale on RunPod

Score: 26 / 100 (Experimental)

This project helps machine learning engineers and researchers efficiently train large AI models, such as large language models, on widely available, smaller GPUs. You supply a model and training data, and the system automatically distributes the training workload across many decentralized cloud GPUs. The output is a fully trained model ready for deployment, along with visualizations of its evaluation performance.
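
This page doesn't document podplex's actual interface, so as a rough illustration of the data-parallel pattern described above, here is a minimal PyTorch DistributedDataParallel sketch. The model, data, and launch command are placeholders, not podplex code.

    # Generic data-parallel training sketch (PyTorch DDP) -- illustrative only;
    # podplex's own API is not documented on this page.
    import torch
    import torch.distributed as dist
    from torch.nn.parallel import DistributedDataParallel as DDP

    def main():
        # torchrun sets RANK / WORLD_SIZE / MASTER_ADDR for each worker.
        dist.init_process_group(backend="gloo")  # use "nccl" on GPU nodes
        model = torch.nn.Linear(16, 1)           # stand-in for a real model
        ddp_model = DDP(model)                   # wraps the model for gradient sync
        opt = torch.optim.SGD(ddp_model.parameters(), lr=0.01)

        for step in range(10):
            x = torch.randn(32, 16)              # each worker sees its own shard
            loss = ddp_model(x).pow(2).mean()
            opt.zero_grad()
            loss.backward()                      # DDP all-reduces gradients here
            opt.step()

        dist.destroy_process_group()

    if __name__ == "__main__":
        main()  # launch with: torchrun --nproc_per_node=4 train.py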

No commits in the last 6 months.

Use this if you need to train large AI models but want to avoid the high cost and limited availability of top-tier GPUs by pooling cheaper, more accessible GPU instances.

Not ideal if your models are small enough to train on a single GPU or if you require direct, low-level control over your distributed training cluster.

Tags: machine-learning-engineering · deep-learning-training · cloud-resource-optimization · AI-model-deployment · large-language-models
No License · Stale (6 months) · No Package · No Dependents
Maintenance: 0 / 25
Adoption: 6 / 25
Maturity: 8 / 25
Community: 12 / 25

Stars: 19
Forks: 3
Language: Jupyter Notebook
License: None
Last pushed: May 26, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/mlops/SohamGovande/podplex"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
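
To consume the same endpoint from Python, here is a minimal sketch using only the standard library. The response schema isn't documented on this page, so it simply pretty-prints whatever JSON comes back.

    import json
    import urllib.request

    # Same endpoint as the curl command above; no API key is required
    # for up to 100 requests/day.
    url = "https://pt-edge.onrender.com/api/v1/quality/mlops/SohamGovande/podplex"
    with urllib.request.urlopen(url) as resp:
        report = json.load(resp)

    # The response schema isn't documented here, so just pretty-print it.
    print(json.dumps(report, indent=2))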