SeldonIO/MLServer
An inference server for your machine learning models, including support for multiple frameworks, multi-model serving and more
MLServer helps machine learning engineers and MLOps professionals deploy their trained machine learning models to production. It takes models saved in various frameworks (like scikit-learn, XGBoost, or Hugging Face) and allows them to be accessed over a network using standard REST or gRPC calls. This enables other applications to send data to the model and receive predictions back efficiently.
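MLServer serves models over the Open Inference Protocol (also known as the V2 inference protocol), so a prediction request is a JSON body with named input tensors. The sketch below builds such a request payload; the model name and feature values are illustrative assumptions, not taken from this page.

```python
import json

# Hypothetical model name for illustration; MLServer exposes models at
# /v2/models/{name}/infer under the Open Inference Protocol.
MODEL_NAME = "my-sklearn-model"

# A V2 inference request: one input tensor holding two rows of four features.
payload = {
    "inputs": [
        {
            "name": "predict",       # input tensor name expected by the model
            "shape": [2, 4],         # two samples, four features each
            "datatype": "FP32",
            "data": [[5.1, 3.5, 1.4, 0.2], [6.2, 2.9, 4.3, 1.3]],
        }
    ]
}

# Any HTTP client can then POST this, e.g.:
#   POST http://localhost:8080/v2/models/my-sklearn-model/infer
print(json.dumps(payload))
```

The response mirrors this shape, returning an `outputs` list of tensors with the model's predictions.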
Use this if you need to serve multiple machine learning models from different frameworks in a scalable and robust way, especially within Kubernetes environments.
Not ideal if you are a data scientist primarily focused on model training and experimentation, and do not need to deploy models for real-time inference.
Stars
875
Forks
227
Language
Python
License
Apache-2.0
Category
Last pushed
Mar 12, 2026
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/mlops/SeldonIO/MLServer"
Open to everyone: 100 requests/day with no key needed. Get a free API key for 1,000 requests/day.
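The same endpoint can be called from code. This sketch only builds the request URL shown above (the response schema is not documented on this page, so no fields are assumed); performing the GET is left as a comment.

```python
from urllib.parse import quote
from urllib.request import Request

BASE = "https://pt-edge.onrender.com/api/v1/quality/mlops"

def quality_url(owner: str, repo: str) -> str:
    """Build the per-repository endpoint URL for this directory's API."""
    return f"{BASE}/{quote(owner)}/{quote(repo)}"

req = Request(quality_url("SeldonIO", "MLServer"))
# urllib.request.urlopen(req) would perform the GET; stay within the
# 100 requests/day anonymous limit, or pass a key for 1,000/day.
print(req.full_url)
```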
Related tools
feast-dev/feast
The Open Source Feature Store for AI/ML
clearml/clearml-serving
ClearML - Model-Serving Orchestration and Repository Solution
lakehq/sail
LakeSail's computation framework with a mission to unify batch processing, stream processing,...
PaddlePaddle/Serving
A flexible, high-performance carrier for machine learning models (PaddlePaddle's model-serving deployment framework)
sustainable-computing-io/kepler-model-server
Model Server for Kepler