JesuFemi-O/Cruise
Easily Deploy your Tensorflow models to Heroku with just the click of a button!
This project helps machine learning engineers and data scientists quickly deploy a trained TensorFlow model to a web server (Heroku) as a REST API. You provide a saved TensorFlow model as a `.tar.gz` archive in an AWS S3 bucket, and Cruise turns it into a live API endpoint ready to receive prediction requests. It is aimed at practitioners who want to make their models accessible to applications without extensive server setup.
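Once deployed, the model is reachable over plain HTTP. The sketch below shows what a prediction request might look like; the Heroku app name, model name, and TensorFlow Serving-style `:predict` route are assumptions for illustration, not details confirmed by the repository.

```shell
# Hypothetical values -- replace with your own app and model names.
APP_URL="https://your-cruise-app.herokuapp.com"
MODEL_NAME="my_model"

# TensorFlow Serving's REST API expects a JSON body with an "instances" key
# holding a batch of inputs (here, one 4-feature example).
PAYLOAD='{"instances": [[1.0, 2.0, 3.0, 4.0]]}'

# Uncomment to send a real request once your endpoint is live:
# curl -s -X POST "$APP_URL/v1/models/$MODEL_NAME:predict" \
#      -H "Content-Type: application/json" \
#      -d "$PAYLOAD"

echo "$PAYLOAD"
```

The response, if the service follows TensorFlow Serving conventions, would be a JSON object with a `predictions` key.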
No commits in the last 6 months.
Use this if you need to rapidly turn a trained TensorFlow model into a live, accessible prediction service without managing complex server infrastructure.
Not ideal if you require advanced custom server configurations, extremely low-latency serving for high-volume traffic beyond Heroku's scale, or prefer a different cloud provider.
Stars: 9
Forks: 7
Language: Shell
License: —
Category:
Last pushed: Sep 01, 2021
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/mlops/JesuFemi-O/Cruise"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
combust/mleap
MLeap: Deploy ML Pipelines to Production
ml-tooling/opyrator
🪄 Turns your machine learning code into microservices with web API, interactive GUI, and more.
jpmorganchase/inference-server
Deploy your AI/ML model to Amazon SageMaker for Real-Time Inference and Batch Transform using...
ebhy/budgetml
Deploy a ML inference service on a budget in less than 10 lines of code.
SocAIty/APIPod
Create web-APIs for long-running tasks. Job based task handling. Get the result with the job id...