BeyonderXX/tensorflow-serving-tutorial

A tutorial of building tensorflow serving service from scratch

Score: 38 / 100 (Emerging)

This project guides machine learning practitioners through taking a trained TensorFlow model and deploying it for live use in a production environment. It shows how to export a Python-trained model to the standard SavedModel format and then stand up a high-performance serving system (TensorFlow Serving) to handle predictions. The outcome is a robust, scalable service that accepts new data inputs and returns predictions efficiently, well suited to Machine Learning Engineers and Data Scientists.
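As a sketch of the serving step described above, TensorFlow Serving is commonly run from the official Docker image, pointed at an exported SavedModel directory. The model name and host path below are placeholders for illustration, not taken from this repository:

```shell
# Pull the official TensorFlow Serving image.
docker pull tensorflow/serving

# Serve a SavedModel exported under /models/my_model/1/ (hypothetical path).
# TF Serving expects a numeric version subdirectory inside the model base path,
# and exposes its REST API on port 8501 by default.
docker run -p 8501:8501 \
  -v /models/my_model:/models/my_model \
  -e MODEL_NAME=my_model \
  tensorflow/serving
```

The version subdirectory (`1/`) is what enables TF Serving's model versioning: dropping a `2/` directory alongside it lets the server hot-swap to the new version without downtime.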

No commits in the last 6 months.

Use this if you have trained a TensorFlow model in Python and need to make it available as a reliable, high-performance prediction service for other applications, without using Python for the serving component.

Not ideal if you only need a simple, one-off prediction without concerns for performance, versioning, or high availability, or if you prefer to keep your serving logic entirely within a Python environment.
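Once a model is served this way, client applications call it over HTTP and only need to build the JSON request body that TF Serving's REST predict endpoint expects. A minimal sketch using only the standard library, assuming a model served under the placeholder name `my_model` on the default REST port 8501:

```python
import json

# TF Serving's REST predict API accepts a JSON body with an "instances" list,
# one entry per input example. The model name "my_model" is a placeholder.
def build_predict_request(instances):
    """Build the JSON body for POST http://host:8501/v1/models/my_model:predict."""
    return json.dumps({"instances": instances})

# Two input examples with three features each (illustrative values).
body = build_predict_request([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
parsed = json.loads(body)
print(len(parsed["instances"]))  # → 2
```

A real client would POST this body with any HTTP library; the server responds with a JSON object whose `"predictions"` list is aligned index-for-index with `"instances"`.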

Topics: MLOps · Model Deployment · Production AI · Machine Learning Engineering · Real-time Inference
Badges: Stale (6 months) · No Package · No Dependents
Maintenance: 0 / 25
Adoption: 9 / 25
Maturity: 16 / 25
Community: 13 / 25


Stars: 89
Forks: 11
Language: C++
License: Apache-2.0
Last pushed: Jul 05, 2023
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/BeyonderXX/tensorflow-serving-tutorial"

Open to everyone: 100 requests/day with no key needed; a free key raises the limit to 1,000/day.