gpauloski/kfac-pytorch

Distributed K-FAC preconditioner for PyTorch

Quality score: 58 / 100 (Established)

This tool helps machine learning engineers and researchers accelerate the training of deep neural networks, especially in distributed computing environments. It takes a PyTorch model and a standard optimizer and applies K-FAC (Kronecker-Factored Approximate Curvature), a second-order optimization technique that preconditions gradients, to speed up learning. The result is a more efficiently trained model that reaches the desired performance faster.

Use this if you are training large deep neural networks with PyTorch, particularly in a distributed environment, and want to reduce the time it takes for your models to converge.

Not ideal if you are using a machine learning framework other than PyTorch, or if your models are small enough that optimization speed is not a critical concern.
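In practice the integration is small. Below is a minimal single-step sketch, assuming the package exposes a KFACPreconditioner whose step() is called between loss.backward() and optimizer.step(), the pattern the repository describes; the import path and constructor arguments here are assumptions, so verify them against the repository README before use.

import torch
import torch.nn as nn
import torch.optim as optim
from kfac.preconditioner import KFACPreconditioner  # assumed import path

model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
optimizer = optim.SGD(model.parameters(), lr=0.1)
criterion = nn.CrossEntropyLoss()
preconditioner = KFACPreconditioner(model)  # assumed constructor; options omitted

# One synthetic training step to show where the preconditioner fits.
data = torch.randn(64, 784)
target = torch.randint(0, 10, (64,))
optimizer.zero_grad()
loss = criterion(model(data), target)
loss.backward()
preconditioner.step()  # apply K-FAC preconditioning to the computed gradients
optimizer.step()       # the standard optimizer consumes the preconditioned gradients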

deep-learning-optimization distributed-ml-training neural-network-training pytorch-ecosystem model-acceleration
No package published · No dependents
Maintenance: 13 / 25
Adoption: 9 / 25
Maturity: 16 / 25
Community: 20 / 25


Stars: 95
Forks: 25
Language: Python
License: MIT
Last pushed: Mar 17, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/gpauloski/kfac-pytorch"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
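For programmatic access, here is a short Python sketch using the requests library; the endpoint URL is taken from the curl command above, but the shape of the JSON payload is an assumption, so inspect the response before relying on specific fields.

import requests

url = "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/gpauloski/kfac-pytorch"
resp = requests.get(url, timeout=10)
resp.raise_for_status()          # fail loudly on HTTP errors or rate limiting
data = resp.json()               # assumed JSON payload; check keys before use
print(data)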