warner-benjamin/optimi
Fast, Modern, and Low Precision PyTorch Optimizers
optimi helps machine learning engineers and researchers train deep learning models efficiently. Given a PyTorch model and training inputs, it produces a trained model with lower memory usage and potentially faster training times. Its key benefit is accurate training even with low-precision data types such as BFloat16, enabling larger models or faster experimentation.
Use this if you are a machine learning engineer or researcher looking to optimize the training of your PyTorch deep learning models, especially to reduce memory usage or speed up training.
Not ideal if you need support for advanced PyTorch optimizer features like compilation, complex numbers, AMSGrad, or Nesterov momentum, or if you are not working with deep learning models.
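The low-precision claim above rests on compensated summation: optimi's documentation describes using Kahan summation so that BFloat16 weight updates do not silently drop small gradient contributions. The sketch below is a generic pure-Python illustration of that technique, not optimi's actual implementation.

```python
def kahan_sum(values):
    """Compensated (Kahan) summation: tracks low-order bits lost to rounding."""
    total = 0.0
    comp = 0.0  # running compensation for lost low-order bits
    for v in values:
        y = v - comp            # re-apply bits lost on the previous step
        t = total + y           # low-order bits of y may be lost here...
        comp = (t - total) - y  # ...recover them (negated) into comp
        total = t
    return total

# Adding small values to a large accumulator loses them in naive summation,
# which is exactly what happens when tiny updates hit low-precision weights:
values = [1e16] + [1.0] * 4
print(sum(values))        # 1e+16: the four 1.0s vanish
print(kahan_sum(values))  # 1.0000000000000004e+16: exact
```

The same idea carries over to optimizers: keeping a per-parameter compensation buffer lets low-precision weights accumulate updates that would otherwise round away.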
Stars
128
Forks
4
Language
Python
License
MIT
Category
ml-frameworks
Last pushed
Dec 29, 2025
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/warner-benjamin/optimi"
Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000 requests/day.
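The same endpoint can be queried from Python with only the standard library. A minimal sketch, assuming the URL structure shown in the curl example above (the path segments `ml-frameworks` and `warner-benjamin/optimi` come from that example; the response's JSON fields are not documented here, so the body is returned as-is):

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, repo: str) -> str:
    """Build the quality-endpoint URL for a repo such as 'warner-benjamin/optimi'."""
    return f"{BASE}/{category}/{repo}"

def fetch_quality(category: str, repo: str) -> dict:
    """GET the endpoint and decode the JSON body (field names not documented here)."""
    with urllib.request.urlopen(quality_url(category, repo)) as resp:
        return json.load(resp)

# Requires network access; uncomment to fetch live data:
# data = fetch_quality("ml-frameworks", "warner-benjamin/optimi")
```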
Higher-rated alternatives
nschaetti/EchoTorch
A Python toolkit for Reservoir Computing and Echo State Network experimentation based on...
metaopt/torchopt
TorchOpt is an efficient library for differentiable optimization built upon PyTorch.
gpauloski/kfac-pytorch
Distributed K-FAC preconditioner for PyTorch
opthub-org/pytorch-bsf
PyTorch implementation of Bezier simplex fitting
pytorch/xla
Enabling PyTorch on XLA Devices (e.g. Google TPU)