OpenEnvision-Lab/ScalingOPT
ScalingOPT [LLM]
This project helps AI researchers and practitioners study in depth how different optimization algorithms affect the training of large language models (LLMs). You supply raw text datasets and pre-trained model configurations, and it produces trained LLM checkpoints, performance metrics, and logs, so you can compare optimizer effectiveness across LLM architectures and training scenarios.
Use this if you are an AI researcher or machine learning engineer focused on understanding, comparing, or developing new optimization techniques for large language models.
Not ideal if you want a simple, off-the-shelf solution for fine-tuning an existing LLM and have no need to dig into optimizer performance.
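A rough illustration of the kind of head-to-head optimizer study the project targets, written in plain PyTorch rather than ScalingOPT's own API (the toy model, synthetic data, and optimizer settings below are assumptions for illustration only):

# Illustrative sketch only: plain PyTorch, NOT ScalingOPT's API.
# Trains the same toy model under two optimizers to show the kind
# of controlled comparison the project is built around.
import torch
import torch.nn as nn

def train(optimizer_cls, steps=200, **opt_kwargs):
    torch.manual_seed(0)  # identical init and data for a fair comparison
    model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1))
    opt = optimizer_cls(model.parameters(), **opt_kwargs)
    x = torch.randn(256, 16)
    y = x.sum(dim=1, keepdim=True)  # synthetic regression target
    loss_fn = nn.MSELoss()
    for _ in range(steps):
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
    return loss.item()

print("AdamW:", train(torch.optim.AdamW, lr=1e-3))
print("SGD:  ", train(torch.optim.SGD, lr=1e-2, momentum=0.9))

In a real study you would swap the toy task for an LLM pre-training run and log full loss curves and throughput, not a single final loss value.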
Stars: 7
Forks: —
Language: Python
License: —
Category: —
Last pushed: Feb 21, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/OpenEnvision-Lab/ScalingOPT"
Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000 requests/day.
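A minimal sketch of calling that endpoint from Python's standard library, assuming the API returns JSON (the response schema is not documented on this page, so the sketch just pretty-prints whatever comes back):

# Minimal sketch: fetch this repo's quality data from the API above.
# Assumes a JSON response; the exact fields are not documented here.
import json
import urllib.request

URL = "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/OpenEnvision-Lab/ScalingOPT"

with urllib.request.urlopen(URL, timeout=10) as resp:
    data = json.load(resp)

print(json.dumps(data, indent=2))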
Higher-rated alternatives
nschaetti/EchoTorch
A Python toolkit for Reservoir Computing and Echo State Network experimentation based on...
metaopt/torchopt
TorchOpt is an efficient library for differentiable optimization built upon PyTorch.
gpauloski/kfac-pytorch
Distributed K-FAC preconditioner for PyTorch
opthub-org/pytorch-bsf
PyTorch implementation of Bezier simplex fitting
pytorch/xla
Enabling PyTorch on XLA Devices (e.g. Google TPU)