knagrecha/saturn
Saturn accelerates the training of large-scale deep learning models with a novel joint optimization approach.
This system helps machine learning engineers and researchers train multiple large deep learning models efficiently at the same time, for example during hyperparameter optimization or model selection. You provide your training jobs, and Saturn automatically selects parallelization techniques and allocates resources to speed up the process. The result is significantly shorter end-to-end training time.
No commits in the last 6 months.
Use this if you are a machine learning engineer or researcher regularly training multiple large deep learning models and need to optimize their training time and resource utilization.
Not ideal if you are only training a single, small deep learning model or are not concerned with optimizing training efficiency across multiple models.
Stars: 24
Forks: 5
Language: Python
License: Apache-2.0
Category:
Last pushed: Nov 22, 2023
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/knagrecha/saturn"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
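The same endpoint can also be queried from Python. Below is a minimal sketch using only the standard library; the response schema is not documented here, so it simply prints the raw JSON. The helper names (`quality_url`, `fetch_quality`) are illustrative, not part of the API.

```python
import json
import urllib.request

# Base URL taken from the curl example above.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality/transformers"


def quality_url(repo: str) -> str:
    """Build the quality-data URL for an owner/name repo slug."""
    return f"{API_BASE}/{repo}"


def fetch_quality(repo: str) -> dict:
    """Fetch quality data for a repo. No key needed for up to 100 requests/day."""
    with urllib.request.urlopen(quality_url(repo)) as resp:
        return json.load(resp)


if __name__ == "__main__":
    # Pretty-print whatever the endpoint returns for this repo.
    print(json.dumps(fetch_quality("knagrecha/saturn"), indent=2))
```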
Higher-rated alternatives
vllm-project/vllm
A high-throughput and memory-efficient inference and serving engine for LLMs
sgl-project/sglang
SGLang is a high-performance serving framework for large language models and multimodal models.
alibaba/MNN
MNN: A blazing-fast, lightweight inference engine battle-tested by Alibaba, powering...
xorbitsai/inference
Swap GPT for any LLM by changing a single line of code. Xinference lets you run open-source,...
tensorzero/tensorzero
TensorZero is an open-source stack for industrial-grade LLM applications. It unifies an LLM...