ebhy/budgetml
Deploy an ML inference service on a budget in less than 10 lines of code.
This project helps machine learning practitioners get their trained models online quickly and affordably. You provide a trained model; it sets up a secure, cost-effective API endpoint that receives data and returns predictions. It is aimed at data scientists, ML engineers, and anyone who needs to serve predictions without a large budget or a complex infrastructure setup.
1,345 stars. No commits in the last 6 months. Available on PyPI.
Use this if you need to deploy a machine learning model as a live API endpoint quickly and affordably, prioritizing speed and cost-efficiency over a full-scale production MLOps setup.
Not ideal if you require a robust, enterprise-grade machine learning operations (MLOps) framework for complex, large-scale production environments.
Stars: 1,345
Forks: 64
Language: Python
License: Apache-2.0
Category: MLOps
Last pushed: Feb 12, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/mlops/ebhy/budgetml"
Open to everyone: 100 requests/day, no key needed. Get a free key for 1,000 requests/day.
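The same endpoint can be queried from Python instead of curl. A minimal sketch using only the standard library; the response is assumed to be plain JSON, and its exact schema isn't documented on this page:

```python
import json
import urllib.request

# Public endpoint shown above (no API key required for up to 100 requests/day).
URL = "https://pt-edge.onrender.com/api/v1/quality/mlops/ebhy/budgetml"

def fetch_repo_stats(url: str = URL) -> dict:
    """Fetch the repository quality data and parse it as a JSON dict.

    The field names in the returned dict are whatever the API provides;
    they are not documented here, so inspect the result before relying on it.
    """
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)

if __name__ == "__main__":
    # Pretty-print the raw response to see which fields are available.
    print(json.dumps(fetch_repo_stats(), indent=2))
```

This is equivalent to the curl call above; swap the last path segments to query a different repository in the catalog.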
Related tools
combust/mleap
MLeap: Deploy ML Pipelines to Production
ml-tooling/opyrator
🪄 Turns your machine learning code into microservices with web API, interactive GUI, and more.
jpmorganchase/inference-server
Deploy your AI/ML model to Amazon SageMaker for Real-Time Inference and Batch Transform using...
SocAIty/APIPod
Create web-APIs for long-running tasks. Job-based task handling. Get the result with the job id...
tanujjain/deploy-ml-model
Deploying a simple machine learning model to an AWS EC2 instance using Flask and Docker.