OpenEnvision-Lab/ScalingOPT

ScalingOPT [LLM]

Quality score: 25 / 100 (Experimental)

This project helps AI researchers and practitioners perform in-depth studies on how different optimization algorithms impact the training of large language models (LLMs). You input raw text datasets and pre-trained model configurations, and it outputs trained LLM checkpoints, performance metrics, and logs, allowing you to compare optimizer effectiveness for various LLM architectures and training scenarios.
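This page does not document ScalingOPT's own interface, so the following is only a rough illustration of the kind of study described above: a minimal PyTorch sketch that trains the same toy model with two stock optimizers under identical seeds and compares final loss. The model, data, and optimizer choices are generic stand-ins, not ScalingOPT code.

```python
# Illustrative sketch only -- not ScalingOPT's API.
# Trains an identical toy model with different optimizers and compares loss.
import torch
import torch.nn as nn

def train(optimizer_cls, steps=200, lr=1e-3, **opt_kwargs):
    torch.manual_seed(0)  # same init and same data for a fair comparison
    model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 1))
    opt = optimizer_cls(model.parameters(), lr=lr, **opt_kwargs)
    loss_fn = nn.MSELoss()
    x = torch.randn(512, 32)  # stand-in for a real text dataset
    y = torch.randn(512, 1)
    for _ in range(steps):
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
    return loss.item()

for opt_cls in (torch.optim.SGD, torch.optim.AdamW):
    print(opt_cls.__name__, train(opt_cls))
```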

Use this if you are an AI researcher or machine learning engineer focused on understanding, comparing, or developing new optimization techniques for large language models.

Not ideal if you are looking for a simple, off-the-shelf solution to fine-tune an existing LLM without needing a deep dive into optimizer performance.

Tags: LLM-training optimizer-research deep-learning-benchmarking neural-network-optimization AI-model-scaling
No package · No dependents
Maintenance: 10 / 25
Adoption: 4 / 25
Maturity: 11 / 25
Community: 0 / 25


Stars: 7
Forks:
Language: Python
License:
Last pushed: Feb 21, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/OpenEnvision-Lab/ScalingOPT"

Open to everyone: 100 requests/day, no key needed. Get a free key for 1,000 requests/day.
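The same endpoint can be called programmatically. A minimal sketch using the requests library follows; the response schema is not documented on this page, so the code just prints the raw JSON rather than assuming any field names.

```python
# Minimal sketch: fetch this project's quality data from the public API.
# The URL comes from the curl example above; the payload's field names
# are not documented here, so we only print the decoded JSON.
import requests

url = "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/OpenEnvision-Lab/ScalingOPT"
resp = requests.get(url, timeout=10)
resp.raise_for_status()  # fail loudly on HTTP errors or rate limiting
print(resp.json())
```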