jpmorganchase/inference-server

Deploy your AI/ML model to Amazon SageMaker for Real-Time Inference and Batch Transform using your own Docker container image.

Score: 51 / 100 (Established)

This tool helps machine learning engineers and data scientists deploy their custom AI/ML models to Amazon SageMaker. It takes your pre-trained model and a Docker container image, then makes your model available for real-time predictions or processing large datasets in batches. This is designed for professionals who build and manage machine learning applications.

No commits in the last 6 months. Available on PyPI.

Use this if you need to deploy a custom AI/ML model, packaged in a Docker container, onto Amazon SageMaker for either real-time inference or large-scale batch processing.

Not ideal if you are looking for a tool to train your AI/ML models or if you prefer to deploy models without using Docker containers.

Tags: MLOps, Model Deployment, Real-time Inference, Batch Prediction, Cloud Machine Learning
Stale (6 months)
Maintenance: 0 / 25
Adoption: 8 / 25
Maturity: 25 / 25
Community: 18 / 25

How are scores calculated?

Stars: 57
Forks: 16
Language: Python
License: Apache-2.0
Last pushed: Apr 07, 2025
Commits (30d): 0
Dependencies: 5

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/mlops/jpmorganchase/inference-server"

Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000 requests/day.
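For programmatic use, the same endpoint can be called from Python with the standard library. This is a minimal sketch: the URL comes from the curl example above, but the response's field names are not documented here, so the code returns the decoded JSON as-is rather than assuming specific keys.

```python
import json
import urllib.request

# Public endpoint from the listing; 100 requests/day without an API key.
API_URL = ("https://pt-edge.onrender.com/api/v1/quality/"
           "mlops/jpmorganchase/inference-server")

def decode_report(body: bytes) -> dict:
    """Decode a JSON response body into a dict.

    The response schema is not documented on this page, so callers
    should inspect the returned dict rather than assume field names.
    """
    return json.loads(body.decode("utf-8"))

def fetch_quality(url: str = API_URL) -> dict:
    """GET the quality report and return the decoded JSON."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return decode_report(resp.read())

# Usage (makes a live, rate-limited request):
#   report = fetch_quality()
#   print(report)
```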