aws-samples/amazon-sagemaker-endpoint-deployment-of-fastai-model-with-torchserve

Deploy FastAI Trained PyTorch Model in TorchServe and Host in Amazon SageMaker Inference Endpoint

Quality score: 38 / 100 (Emerging)

This project helps machine learning engineers and data scientists deploy FastAI deep learning models for real-time inference. It takes a pre-trained FastAI model and prepares it for efficient, scalable deployment using TorchServe and Amazon SageMaker endpoints. The output is a robust, managed inference service ready to process predictions.
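The deployment path the project describes (export the FastAI learner's underlying PyTorch model, wrap it in a TorchServe custom handler, then host the container behind a SageMaker endpoint) can be sketched as a minimal handler skeleton. This is an illustrative sketch following TorchServe's handler contract, not the repo's actual handler: the class name is invented, and the model loading and inference steps are placeholders.

```python
import json


class FastaiImageHandler:
    """Minimal TorchServe-style custom handler sketch (illustrative only;
    the repo's real handler and preprocessing will differ)."""

    def __init__(self):
        self.model = None
        self.initialized = False

    def initialize(self, context):
        # In a real handler, load the exported TorchScript model here, e.g.
        # self.model = torch.jit.load(model_path); omitted in this sketch.
        self.initialized = True

    def preprocess(self, data):
        # TorchServe passes a list of requests; each row carries the raw
        # request body (image bytes) under "body" or "data".
        return [row.get("body") or row.get("data") for row in data]

    def inference(self, inputs):
        # Placeholder: a real handler would batch the inputs into a tensor
        # and run self.model on it.
        return [{"class": "unknown", "confidence": 0.0} for _ in inputs]

    def postprocess(self, outputs):
        # TorchServe expects one JSON-serializable entry per request.
        return [json.dumps(o) for o in outputs]
```

TorchServe then packages the model and this handler with `torch-model-archiver` into a `.mar` archive that the SageMaker container serves.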

No commits in the last 6 months.

Use this if you have a trained FastAI computer vision model and need to serve predictions at scale without managing complex infrastructure yourself.

Not ideal if you are only experimenting with FastAI models and do not need to deploy them to a production-ready, scalable environment.
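Once such an endpoint is deployed, clients call it through the SageMaker runtime API. A minimal sketch with `boto3`, assuming a hypothetical endpoint name and an image content type (the repo's actual endpoint name and payload format may differ):

```python
import json


def build_request(image_bytes: bytes) -> dict:
    """Build the kwargs for invoke_endpoint.

    The endpoint name below is a hypothetical placeholder, not one
    created by the repo's notebooks.
    """
    return {
        "EndpointName": "fastai-torchserve-endpoint",  # placeholder name
        "ContentType": "application/x-image",
        "Body": image_bytes,
    }


if __name__ == "__main__":
    # The SDK call is kept behind the guard so the sketch imports cleanly
    # even where boto3 and AWS credentials are unavailable.
    import boto3

    client = boto3.client("sagemaker-runtime")
    with open("sample.jpg", "rb") as f:
        response = client.invoke_endpoint(**build_request(f.read()))
    print(json.loads(response["Body"].read()))
```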

Tags: deep-learning-deployment · real-time-inference · computer-vision · model-serving · cloud-ml-ops
Flags: Stale (6 months) · No package · No dependents
Maintenance 0 / 25
Adoption 9 / 25
Maturity 16 / 25
Community 13 / 25


Stars: 75
Forks: 9
Language: Jupyter Notebook
License: MIT-0
Last pushed: Jun 19, 2021
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/aws-samples/amazon-sagemaker-endpoint-deployment-of-fastai-model-with-torchserve"

Open to everyone: 100 requests/day with no key needed, or get a free key for 1,000/day.
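The same endpoint can be queried from Python with the standard library. A small sketch that builds the URL the curl command above hits; the shape of the JSON response is not documented here, so the script simply prints whatever comes back:

```python
from urllib.parse import quote
from urllib.request import urlopen

BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the quality-API URL used in the curl example above."""
    return f"{BASE}/{quote(category)}/{quote(owner)}/{quote(repo)}"


if __name__ == "__main__":
    import json

    url = quality_url(
        "ml-frameworks",
        "aws-samples",
        "amazon-sagemaker-endpoint-deployment-of-fastai-model-with-torchserve",
    )
    # Network call; the response schema is an assumption left unparsed.
    with urlopen(url) as resp:
        print(json.load(resp))
```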