alvarobartt/serving-pytorch-models
Serving PyTorch models with TorchServe :fire:
This project helps machine learning engineers and data scientists deploy PyTorch image classification models for real-time inference. You can take a trained PyTorch model, typically saved as a .pth file, and expose it behind a robust serving endpoint, so your applications can send images and receive classification predictions efficiently.
103 stars. No commits in the last 6 months.
Use this if you need to serve a PyTorch image classification model as an API endpoint for other applications to consume.
Not ideal if you are looking to train a model from scratch or if your primary need is for model serving in a framework other than PyTorch/TorchServe.
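The archive-then-serve workflow the project is built around can be sketched with TorchServe's own CLI tools. This is a minimal, hypothetical example: the model name, weight file, and image are placeholders, and an eager-mode .pth may additionally require a --model-file argument pointing at the model class definition.

```shell
# Hypothetical names throughout; adjust to your own model.
# 1. Package the trained weights into a .mar archive.
torch-model-archiver --model-name resnet_classifier \
  --version 1.0 \
  --serialized-file model.pth \
  --handler image_classifier \
  --export-path model-store

# 2. Start TorchServe pointing at the model store.
torchserve --start --ncs \
  --model-store model-store \
  --models resnet_classifier=resnet_classifier.mar

# 3. Send an image to the inference endpoint.
curl http://localhost:8080/predictions/resnet_classifier -T sample.jpg
```

The built-in image_classifier handler takes care of preprocessing and returns top-k class probabilities as JSON, which is why no custom handler code is needed for a standard image classification model.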
Stars: 103
Forks: 16
Language: Jupyter Notebook
License: MIT
Category:
Last pushed: Mar 07, 2023
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/alvarobartt/serving-pytorch-models"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
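If you would rather call the endpoint from Python than curl, a minimal sketch using only the standard library (the exact response schema is an assumption; the endpoint is taken to return JSON):

```python
import json
from urllib.request import urlopen

BASE_URL = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the quality-endpoint URL for a given repository."""
    return f"{BASE_URL}/{category}/{owner}/{repo}"

def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """Fetch the quality record; anonymous access allows 100 requests/day.

    Assumption: the endpoint returns a JSON document, per the curl
    example above.
    """
    with urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)
```

For example, fetch_quality("ml-frameworks", "alvarobartt", "serving-pytorch-models") would request the same record as the curl command above.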
Higher-rated alternatives
modelscope/modelscope
ModelScope: bring the notion of Model-as-a-Service to life.
basetenlabs/truss
The simplest way to serve AI/ML models in production
Lightning-AI/LitServe
A minimal Python framework for building custom AI inference servers with full control over...
deepjavalibrary/djl-serving
A universal scalable machine learning model deployment solution
tensorflow/serving
A flexible, high-performance serving system for machine learning models