iitzco/tfserve
Serve TF models simply and easily as an HTTP API
This tool helps machine learning engineers and data scientists deploy TensorFlow models as simple HTTP APIs. You provide a trained TensorFlow model (a .pb file or checkpoint directory) and specify the input/output tensor names; tfserve then feeds incoming request data through the model and returns the predictions as an HTTP response.
No commits in the last 6 months. Available on PyPI.
Use this if you need to quickly expose a TensorFlow model for real-time inference via a web service without complex infrastructure.
Not ideal if your model requires multiple graph runs for a single inference or you need advanced model serving features like A/B testing or versioning.
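The workflow above hinges on two user-supplied functions: one that turns the raw HTTP request body into a feed dict keyed by input tensor names, and one that turns the output tensor values into a JSON-friendly response. A minimal sketch of that pair follows; the tensor names ("import/x:0", "import/out:0") and the JSON request shape are illustrative assumptions, not fixed by tfserve.

```python
import json

# tfserve calls encode(request_body) to build the model's feed dict and
# decode(outputs) to build the HTTP response. The tensor names below are
# placeholders for whatever your graph actually exposes.

def encode(request_body: bytes) -> dict:
    # Parse a JSON body like {"x": [1.0, 2.0]} into {input_tensor: value}.
    payload = json.loads(request_body)
    return {"import/x:0": payload["x"]}

def decode(outputs: dict) -> dict:
    # Map the raw output tensor value to a JSON-friendly prediction payload.
    return {"prediction": outputs["import/out:0"]}

# Wiring it into a server (sketch only; requires tfserve installed):
# from tfserve import TFServeApp
# app = TFServeApp("model.pb", ["import/x:0"], ["import/out:0"],
#                  encode, decode)
# app.run("127.0.0.1", 5000)
```

Keeping the HTTP-to-tensor translation in these two small functions is what lets the tool stay model-agnostic: the server loop never needs to know your input format.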
Stars
36
Forks
10
Language
Python
License
MIT
Category
Last pushed
Oct 29, 2018
Commits (30d)
0
Dependencies
4
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/iitzco/tfserve"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
modelscope/modelscope
ModelScope: bring the notion of Model-as-a-Service to life.
basetenlabs/truss
The simplest way to serve AI/ML models in production
Lightning-AI/LitServe
A minimal Python framework for building custom AI inference servers with full control over...
deepjavalibrary/djl-serving
A universal scalable machine learning model deployment solution
tensorflow/serving
A flexible, high-performance serving system for machine learning models