SeldonIO/MLServer

An inference server for your machine learning models, including support for multiple frameworks, multi-model serving and more

Score: 61 / 100 (Established)

MLServer helps machine learning engineers and MLOps professionals deploy trained models to production. It takes models saved in various frameworks (such as scikit-learn, XGBoost, or Hugging Face) and exposes them over the network via standard REST or gRPC calls, so other applications can send data to a model and receive predictions back efficiently.


Use this if you need to serve multiple machine learning models from different frameworks in a scalable and robust way, especially within Kubernetes environments.

Not ideal if you are a data scientist primarily focused on model training and experimentation, and do not need to deploy models for real-time inference.

Tags: model-serving, machine-learning-deployment, mlops, real-time-inference, api-development
No package · No dependents
Score breakdown:
Maintenance: 10 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 25 / 25
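The four subscores appear to add up to the headline score, which suggests the overall rating is simply their sum (this is an inference from the numbers shown here, not documented behavior of the scoring site):

```python
# Subscores as listed above, each out of 25.
subscores = {"Maintenance": 10, "Adoption": 10, "Maturity": 16, "Community": 25}

total = sum(subscores.values())
print(total)  # 61, matching the 61/100 headline score
```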


Stars: 875
Forks: 227
Language: Python
License: Apache-2.0
Last pushed: Mar 12, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/mlops/SeldonIO/MLServer"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
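The same lookup can be done from Python with the standard library. A minimal sketch, assuming the endpoint follows the path pattern shown in the curl command above and returns JSON (the response field names are not documented here):

```python
import json
import urllib.request

# Path pattern inferred from the example curl command above.
API_URL = "https://pt-edge.onrender.com/api/v1/quality/{ecosystem}/{owner}/{repo}"

def quality_url(ecosystem: str, owner: str, repo: str) -> str:
    # Build the per-project endpoint URL from its path segments.
    return API_URL.format(ecosystem=ecosystem, owner=owner, repo=repo)

def fetch_quality(ecosystem: str, owner: str, repo: str) -> dict:
    # No-key tier: 100 requests/day. The JSON schema of the response
    # is an assumption; inspect the raw body before relying on fields.
    with urllib.request.urlopen(quality_url(ecosystem, owner, repo)) as resp:
        return json.load(resp)

# Example: fetch_quality("mlops", "SeldonIO", "MLServer")
```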