tensorflow/serving
A flexible, high-performance serving system for machine learning models
TensorFlow Serving takes trained machine learning models to production, letting them serve predictions to your users or systems. You provide a trained model (such as a recommendation engine or an image classifier), and it returns predictions or classifications over a network API. It is aimed at machine learning engineers and MLOps specialists who deploy and manage models in real-world applications.
Use this if you need to reliably deploy, manage, and scale machine learning models to make real-time predictions in a production environment.
Not ideal if you are still in the model training or experimentation phase and don't need to serve predictions to external systems.
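Once a model is deployed, TensorFlow Serving exposes it over a REST API (port 8501 by default) at `/v1/models/<name>:predict`. Below is a minimal sketch of building such a request; the server address and the model name `my_classifier` are placeholders you would replace with your own deployment.

```python
import json

# Assumed local deployment; TensorFlow Serving's REST API serves
# predictions at /v1/models/<name>:predict on port 8501 by default.
SERVER = "http://localhost:8501"
MODEL = "my_classifier"  # placeholder model name

def build_predict_request(instances):
    """Build the URL and JSON body for a TF Serving REST predict call."""
    url = f"{SERVER}/v1/models/{MODEL}:predict"
    body = json.dumps({"instances": instances})
    return url, body

url, body = build_predict_request([[1.0, 2.0, 5.0]])
# Send with curl or any HTTP client, e.g.:
#   curl -d '{"instances": [[1.0, 2.0, 5.0]]}' \
#        http://localhost:8501/v1/models/my_classifier:predict
print(url)
```

The `instances` list holds one input per prediction; the server replies with a JSON object whose `predictions` field mirrors the batch.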
Stars: 6,349
Forks: 2,200
Language: C++
License: Apache-2.0
Category:
Last pushed: Dec 18, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/tensorflow/serving"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
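For use beyond a one-off shell call, the same endpoint can be wrapped in a small Python client. This is a sketch only: the URL comes from the curl example above, but the `Authorization: Bearer` header name for the optional API key is an assumption, so check the service's documentation before relying on it.

```python
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def build_request(repo, api_key=None):
    """Build a urllib Request for the repo-quality endpoint.

    The bearer-token header scheme is an assumption, not documented
    in the listing above.
    """
    req = urllib.request.Request(f"{API_BASE}/ml-frameworks/{repo}")
    if api_key:
        req.add_header("Authorization", f"Bearer {api_key}")  # assumed scheme
    return req

req = build_request("tensorflow/serving")
print(req.full_url)
# To actually fetch (requires network):
#   import json
#   with urllib.request.urlopen(req) as resp:
#       data = json.loads(resp.read())
```

Keeping request construction separate from the network call makes the client easy to test offline and to swap between the keyless and keyed rate limits.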
Related frameworks
modelscope/modelscope
ModelScope: bring the notion of Model-as-a-Service to life.
basetenlabs/truss
The simplest way to serve AI/ML models in production
Lightning-AI/LitServe
A minimal Python framework for building custom AI inference servers with full control over...
deepjavalibrary/djl-serving
A universal scalable machine learning model deployment solution
labmlai/labml
🔎 Monitor deep learning model training and hardware usage from your mobile phone 📱