gpauloski/kfac-pytorch
Distributed K-FAC preconditioner for PyTorch
This tool helps machine learning engineers and researchers accelerate the training of deep neural networks, especially in distributed settings. It wraps a PyTorch model and a standard optimizer with K-FAC (Kronecker-Factored Approximate Curvature), a second-order preconditioning technique that uses curvature information to take better-scaled update steps. The result is a model that reaches the desired performance in fewer training iterations.
Use this if you are training large deep neural networks with PyTorch, particularly in a distributed environment, and want to reduce the time it takes for your models to converge.
Not ideal if you use a machine learning framework other than PyTorch, or if your models are small enough that convergence speed is not a pressing concern.
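The efficiency K-FAC relies on comes from a standard linear-algebra identity: when the curvature matrix is approximated as a Kronecker product of two small factors, its inverse is the Kronecker product of the two small inverses, so two cheap inversions replace one expensive one. The sketch below demonstrates that identity with tiny 2x2 matrices in pure Python; the matrices here are illustrative stand-ins, not the actual covariance factors the library computes.

```python
# K-FAC exploits the identity (A (x) G)^-1 = A^-1 (x) G^-1:
# inverting two small factors is far cheaper than inverting their
# large Kronecker product. Minimal demo with 2x2 matrices.

def kron(a, b):
    """Kronecker product of two matrices given as lists of lists."""
    ra, ca, rb, cb = len(a), len(a[0]), len(b), len(b[0])
    out = [[0.0] * (ca * cb) for _ in range(ra * rb)]
    for i in range(ra):
        for j in range(ca):
            for k in range(rb):
                for m in range(cb):
                    out[i * rb + k][j * cb + m] = a[i][j] * b[k][m]
    return out

def inv2(m):
    """Inverse of a 2x2 matrix."""
    (a, b), (c, d) = m
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

# Illustrative stand-ins for the two Kronecker factors.
A = [[2.0, 0.0], [0.0, 3.0]]
G = [[4.0, 1.0], [1.0, 2.0]]

# Invert the 4x4 product indirectly via the two 2x2 inverses.
small_route = kron(inv2(A), inv2(G))

# Check: multiplying by the full Kronecker product yields the identity.
product = matmul(kron(A, G), small_route)
is_identity = all(abs(product[i][j] - (1.0 if i == j else 0.0)) < 1e-9
                  for i in range(4) for j in range(4))
print(is_identity)  # True
```

For an n x m weight matrix, this turns one O((nm)^3) inversion into O(n^3 + m^3), which is what makes second-order preconditioning tractable at scale.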
Stars
95
Forks
25
Language
Python
License
MIT
Category
Last pushed
Mar 17, 2026
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/gpauloski/kfac-pytorch"
Open to everyone: 100 requests/day with no key. A free key raises the limit to 1,000/day.
Related frameworks
nschaetti/EchoTorch
A Python toolkit for Reservoir Computing and Echo State Network experimentation based on...
metaopt/torchopt
TorchOpt is an efficient library for differentiable optimization built upon PyTorch.
opthub-org/pytorch-bsf
PyTorch implementation of Bezier simplex fitting
pytorch/xla
Enabling PyTorch on XLA Devices (e.g. Google TPU)
stanford-centaur/PyPantograph
A Machine-to-Machine Interaction System for Lean 4.