iitzco/tfserve

Serve TF models simply and easily as an HTTP API

Score: 49 / 100 (Emerging)

This tool helps machine learning engineers and data scientists deploy TensorFlow models as simple HTTP APIs. You provide a trained TensorFlow model (a .pb file or checkpoint directory) and specify the input/output tensor names. It then handles incoming data, passes it through your model, and returns the predictions as an HTTP response.
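The flow described above can be sketched in code. This is a minimal, hypothetical wiring that assumes tfserve exposes a `TFServeApp(model_path, in_tensors, out_tensors, encode, decode)` interface; the tensor names, model path, and JSON schema below are illustrative assumptions, not taken from this listing.

```python
# Sketch: wiring a frozen TensorFlow graph into an HTTP endpoint with tfserve.
# Tensor names ("import/x:0", "import/out:0"), the model path, and the JSON
# request/response shape are all made up for illustration.
import json

import numpy as np


def encode(request_body: bytes) -> dict:
    """Map the raw HTTP request body to {input_tensor_name: value}."""
    payload = json.loads(request_body)
    return {"import/x:0": np.asarray(payload["input"], dtype=np.float32)}


def decode(outputs: dict) -> dict:
    """Map {output_tensor_name: value} back to a JSON-serializable response."""
    return {"prediction": outputs["import/out:0"].tolist()}


if __name__ == "__main__":
    # Requires `pip install tfserve` and a frozen graph at model.pb.
    from tfserve import TFServeApp

    app = TFServeApp("model.pb", ["import/x:0"], ["import/out:0"],
                     encode, decode)
    app.run("127.0.0.1", 5000)
```

The `encode`/`decode` pair is where you adapt the HTTP payload to your model's tensors, so they are plain functions you can unit-test without a model loaded.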

No commits in the last 6 months. Available on PyPI.

Use this if you need to quickly expose a TensorFlow model for real-time inference via a web service without complex infrastructure.

Not ideal if your model requires multiple graph runs for a single inference or you need advanced model serving features like A/B testing or versioning.

Tags: Machine Learning Deployment, Model Serving, Real-time Inference, Deep Learning Operations
Stale: 6 months
Maintenance: 0 / 25
Adoption: 7 / 25
Maturity: 25 / 25
Community: 17 / 25


Stars: 36
Forks: 10
Language: Python
License: MIT
Last pushed: Oct 29, 2018
Commits (30d): 0
Dependencies: 4

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/iitzco/tfserve"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000/day.