iterative/terraform-provider-iterative
☁️ Terraform plugin for machine learning workloads: spot instance recovery & auto-termination | AWS, GCP, Azure, Kubernetes
This tool helps machine learning engineers and data scientists run computationally intensive workloads on cloud infrastructure such as AWS, Azure, GCP, or Kubernetes without needing deep cloud expertise. You provide your training script and data, and it automatically provisions, manages, and terminates the necessary computing resources, including GPU instances, across the supported cloud providers. The output is your trained model, results, or logs; spot-instance recovery and auto-termination can also yield significant cost savings.
295 stars. No commits in the last 6 months.
Use this if you need to run machine learning training, simulations, or other batch processing tasks efficiently and cost-effectively on cloud infrastructure, especially when using spot instances, and want to avoid cloud vendor lock-in.
Not ideal if you require continuous, long-running services or real-time model serving, as this tool is designed for task-oriented, auto-terminating workloads.
Stars: 295
Forks: 29
Language: Go
License: Apache-2.0
Category:
Last pushed: Dec 11, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/mlops/iterative/terraform-provider-iterative"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
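The endpoint above can also be queried from a script. A minimal Python sketch using only the standard library; the URL components come from the curl example, but the response schema is not documented here, so the fetch is left as a commented-out step rather than assuming particular JSON fields:

```python
import urllib.request
import json

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the quality-endpoint URL for a given repository slug."""
    return f"{BASE}/{category}/{owner}/{repo}"

url = quality_url("mlops", "iterative", "terraform-provider-iterative")
print(url)

# To actually fetch (field names depend on the undocumented schema):
# with urllib.request.urlopen(url) as resp:
#     data = json.load(resp)
#     print(data)
```

The same pattern works for any other repository listed on the site by swapping the category/owner/repo segments.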
Higher-rated alternatives
aws-controllers-k8s/sagemaker-controller
ACK service controller for Amazon SageMaker
SuperCowPowers/workbench
Workbench: An easy to use Python API for creating and deploying AWS SageMaker Models
aws/aws-step-functions-data-science-sdk-python
Step Functions Data Science SDK for building machine learning (ML) workflows and pipelines on AWS
aws-samples/amazon-sagemaker-mlops-workshop
MLOps workshop with Amazon SageMaker
aws/sagemaker-sparkml-serving-container
This code is used to build & run a Docker container for performing predictions against a Spark...