tensorflow/serving

A flexible, high-performance serving system for machine learning models

Score: 57 / 100 (Established)

TensorFlow Serving brings trained machine learning models to life, letting them make predictions for your users or systems. You provide a trained model (such as a recommendation engine or an image classifier), and it serves predictions or classifications. It's used by machine learning engineers and MLOps specialists responsible for deploying and managing models in real-world applications.


Use this if you need to reliably deploy, manage, and scale machine learning models to make real-time predictions in a production environment.

Not ideal if you are still in the model training or experimentation phase and don't need to serve predictions to external systems.
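Once a model is deployed, clients typically get predictions over TensorFlow Serving's REST API, which exposes a `POST /v1/models/<model_name>:predict` endpoint. A minimal sketch of building such a request; the host, port, model name `my_model`, and input values are placeholders, not taken from this page:

```python
import json
import urllib.request

# Host, port, and model name are placeholders for a locally running server.
SERVER = "http://localhost:8501"
MODEL = "my_model"

# Request body: a JSON object with an "instances" list, one entry per input.
payload = json.dumps({"instances": [[1.0, 2.0, 3.0, 4.0]]}).encode("utf-8")

request = urllib.request.Request(
    f"{SERVER}/v1/models/{MODEL}:predict",
    data=payload,
    headers={"Content-Type": "application/json"},
)

# Uncomment to send the request against a running TensorFlow Serving instance:
# with urllib.request.urlopen(request) as response:
#     predictions = json.loads(response.read())["predictions"]
```

The response, when a server is running, is a JSON object whose `"predictions"` list mirrors the order of the submitted instances.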

model-deployment machine-learning-operations real-time-inference AI-application-serving prediction-api
No package · No dependents
Maintenance: 6 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 25 / 25


Stars: 6,349
Forks: 2,200
Language: C++
License: Apache-2.0
Last pushed: Dec 18, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/tensorflow/serving"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
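The same data can be fetched programmatically. A minimal sketch; the `fetch_quality` helper name is ours, and the field names in `sample` are assumptions about the response shape based on the scores shown on this page, not a documented schema:

```python
import json
import urllib.request

API_URL = "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/tensorflow/serving"

def fetch_quality(url: str) -> dict:
    # No API key needed for the first 100 requests per day.
    with urllib.request.urlopen(url) as response:
        return json.loads(response.read())

# Assumed response shape, mirroring the four sub-scores on this page:
sample = {"maintenance": 6, "adoption": 10, "maturity": 16, "community": 25}

# The overall score is the sum of the four 25-point components.
total = sum(sample.values())  # 6 + 10 + 16 + 25 = 57
```

Call `fetch_quality(API_URL)` to retrieve the live data instead of the `sample` dictionary above.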