PaddlePaddle/Serving
A flexible, high-performance carrier for machine learning models (PaddlePaddle's framework for deploying models as services)
This project helps machine learning engineers and MLOps specialists deploy trained AI models into live applications. It takes your pre-trained PaddlePaddle, TensorFlow, or PyTorch models and packages them into a high-performance, scalable service. The output is a robust, always-on AI service ready to integrate with your products, supporting tasks like image classification, object detection, natural language processing, and recommendation systems.
Use this if you need to transform your deep learning models into production-ready, high-performance, and scalable inference services that can handle real-time requests from end-user applications.
Not ideal if you are looking for a tool to train new machine learning models or if your primary need is for local, offline model inference rather than a networked service.
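To make "networked inference service" concrete: once a model is deployed with a serving framework like this, applications query it over HTTP rather than loading the model locally. A minimal client sketch, assuming a JSON request shape of `{"feed": [...], "fetch": [...]}` and a local endpoint; the host, port, route, and payload fields are illustrative assumptions, not Paddle Serving's documented API:

```python
import json
import urllib.request

def build_request(feed: dict, fetch: list) -> bytes:
    """Serialize one inference request.

    The {"feed": [...], "fetch": [...]} shape mirrors common serving
    conventions and is an assumption here, not a documented schema.
    """
    return json.dumps({"feed": [feed], "fetch": fetch}).encode("utf-8")

def predict(url: str, feed: dict, fetch: list) -> dict:
    """POST a single inference request to a running model service."""
    req = urllib.request.Request(
        url,
        data=build_request(feed, fetch),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp)

# Example (assumes a model service is already deployed at this address):
# result = predict("http://127.0.0.1:9292/uci/prediction",
#                  feed={"x": [0.1] * 13}, fetch=["price"])
```

The point of the sketch is the separation of concerns: the client only serializes inputs and names the outputs it wants; model loading, batching, and scaling stay inside the service.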
Stars: 925
Forks: 250
Language: C++
License: Apache-2.0
Category:
Last pushed: Feb 20, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/mlops/PaddlePaddle/Serving"
Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000 requests/day.
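The same endpoint can be called from any HTTP client. A small Python sketch; only the URL comes from the listing above, while the `Authorization: Bearer` header for keyed access is an assumption to be checked against the API's own documentation:

```python
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality/mlops"

def quality_url(org: str, repo: str) -> str:
    """Build the quality-API URL for a GitHub org/repo pair."""
    return f"{API_BASE}/{org}/{repo}"

def fetch_quality(org: str, repo: str, api_key=None) -> dict:
    """Fetch quality data as parsed JSON; a key (if you have one)
    raises the daily request limit."""
    req = urllib.request.Request(quality_url(org, repo))
    if api_key:  # header name is an assumption; confirm with the API docs
        req.add_header("Authorization", f"Bearer {api_key}")
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp)

# Example (live network call, counted against the 100/day anonymous limit):
# data = fetch_quality("PaddlePaddle", "Serving")
```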
Related tools
feast-dev/feast
The Open Source Feature Store for AI/ML
clearml/clearml-serving
ClearML - Model-Serving Orchestration and Repository Solution
lakehq/sail
LakeSail's computation framework with a mission to unify batch processing, stream processing,...
SeldonIO/MLServer
An inference server for your machine learning models, including support for multiple frameworks,...
sustainable-computing-io/kepler-model-server
Model Server for Kepler