tobegit3hub/simple_tensorflow_serving
Generic and easy-to-use serving service for machine learning models
This project helps machine learning engineers and MLOps professionals deploy trained models, regardless of the framework used to build them, into a live environment. You point it at your pre-trained model files, and it exposes a web service (API endpoint) that other applications can call for predictions, letting your models be integrated into larger systems for real-time intelligence or automated decision-making.
758 stars. No commits in the last 6 months.
Use this if you need to take a model created in a popular framework such as TensorFlow, PyTorch, or scikit-learn and expose it as a reliable, scalable web service for other applications to consume.
Not ideal if you are a data scientist focused primarily on model training and development who does not need to deploy models as a live service.
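To make the "other applications call it for predictions" part concrete, here is a minimal client sketch. It assumes the server is running locally on port 8500 (the project's documented default, to the best of our knowledge) and accepts a JSON POST body with `model_name` and `data` fields; the exact field names and input shapes depend on your exported model, so treat this as an illustration rather than the definitive request schema.

```python
import json
from urllib import request

# Assumption: simple_tensorflow_serving listens on localhost:8500 by default.
ENDPOINT = "http://127.0.0.1:8500"


def build_payload(model_name, inputs):
    """Build the JSON body for a prediction request.

    The {"model_name": ..., "data": ...} shape is an assumption based on
    the project's example requests; adjust "data" keys to match the input
    tensors of your exported model.
    """
    body = {"model_name": model_name, "data": inputs}
    return json.dumps(body).encode("utf-8")


def predict(model_name, inputs):
    """POST a prediction request and return the decoded JSON response."""
    req = request.Request(
        ENDPOINT,
        data=build_payload(model_name, inputs),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read())


# Build (but do not send) a sample payload for a model named "default"
# with a hypothetical input tensor called "keys".
payload = build_payload("default", {"keys": [[1.0], [2.0]]})
```

With a model deployed, calling `predict("default", {"keys": [[1.0], [2.0]]})` would return the server's JSON prediction response for those inputs.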
Stars
758
Forks
186
Language
JavaScript
License
Apache-2.0
Category
Last pushed
Mar 20, 2025
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/tobegit3hub/simple_tensorflow_serving"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Related frameworks
modelscope/modelscope
ModelScope: bring the notion of Model-as-a-Service to life.
basetenlabs/truss
The simplest way to serve AI/ML models in production
Lightning-AI/LitServe
A minimal Python framework for building custom AI inference servers with full control over...
deepjavalibrary/djl-serving
A universal scalable machine learning model deployment solution
tensorflow/serving
A flexible, high-performance serving system for machine learning models