warner-benjamin/optimi

Fast, Modern, and Low Precision PyTorch Optimizers

Score: 39 / 100 (Emerging)

optimi helps machine learning engineers and researchers train deep learning models efficiently. Given a PyTorch model and training inputs, it produces a trained model using less memory, often with faster training. Its key benefit is accurate training even in lower-precision data types such as BFloat16, enabling larger models or faster experimentation.
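The accuracy claim for low-precision training rests on compensated (Kahan) summation, which keeps the low-order bits that each small parameter update would otherwise round away. As an illustration of the idea only, not optimi's actual implementation, here is a minimal pure-Python sketch:

```python
# Minimal sketch of Kahan (compensated) summation, the numerical idea behind
# accurate low-precision accumulation: a small "compensation" term carries the
# low-order bits that each addition would otherwise round off.
# Illustrative only; not optimi's implementation.

def kahan_sum(values):
    total = 0.0
    comp = 0.0  # running compensation for lost low-order bits
    for v in values:
        y = v - comp            # re-inject bits lost on the previous step
        t = total + y           # big + small: low-order bits of y round off
        comp = (t - total) - y  # recover exactly what was rounded away
        total = t
    return total

# A thousand tiny terms that a naive left-to-right float sum drops entirely:
values = [1.0] + [1e-16] * 1000
```

A naive accumulation of `values` stays at exactly 1.0, while the compensated sum recovers the expected 1.0 + 1e-13.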


Use this if you are a machine learning engineer or researcher looking to optimize the training of your PyTorch deep learning models, especially to reduce memory usage or speed up training.

Not ideal if you need support for advanced PyTorch optimizer features like compilation, complex numbers, AMSGrad, or Nesterov momentum, or if you are not working with deep learning models.

deep-learning-training model-optimization resource-management neural-networks computational-efficiency
No package. No dependents.
Maintenance: 6 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 7 / 25


Stars: 128
Forks: 4
Language: Python
License: MIT
Last pushed: Dec 29, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/warner-benjamin/optimi"

Open to everyone: 100 requests/day, no key needed. Get a free key for 1,000/day.