aws-samples/aws-lambda-docker-serverless-inference

Serve scikit-learn, XGBoost, TensorFlow, and PyTorch models using AWS Lambda's container image support.

Score: 44 / 100 (Emerging)

This project helps machine learning engineers or MLOps specialists deploy trained machine learning models for prediction without managing servers. You provide a trained model from frameworks like scikit-learn, XGBoost, TensorFlow, or PyTorch, and it outputs a scalable, pay-per-use inference endpoint. This is ideal for those who need to serve predictions from various data types, including text, images, or tabular data.

100 stars. No commits in the last 6 months.

Use this if you need to serve predictions from your machine learning models (e.g., for object detection, sentiment analysis, or classification) and want to minimize infrastructure management and cost, especially for infrequent or bursty usage.

Not ideal if your application requires consistently low latency for every inference request, as serverless functions can incur a cold-start delay when a new execution environment is initialized.

Tags: machine-learning-deployment, model-serving, AI-inference, MLOps, cloud-infrastructure
Flags: Stale (6 months), No Package, No Dependents
Maintenance 0 / 25
Adoption 9 / 25
Maturity 16 / 25
Community 19 / 25


Stars: 100
Forks: 19
Language: Jupyter Notebook
License: MIT-0
Last pushed: Jul 25, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/aws-samples/aws-lambda-docker-serverless-inference"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
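The same endpoint can be called from Python with the standard library. The URL structure follows the curl example above; the fields inside the JSON response are not documented here, so this sketch simply returns the decoded body for inspection.

```python
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the quality-report URL for a repository, mirroring the curl example."""
    return f"{API_BASE}/{category}/{owner}/{repo}"


def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """GET the quality report and decode the JSON response body."""
    url = quality_url(category, owner, repo)
    with urllib.request.urlopen(url) as resp:
        return json.loads(resp.read().decode("utf-8"))


# Example (performs a network request):
# report = fetch_quality("ml-frameworks", "aws-samples",
#                        "aws-lambda-docker-serverless-inference")
```

With no API key this stays within the 100-requests/day anonymous limit; a key raises that to 1,000/day.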