axonn-ai/axonn

Parallel framework for training and fine-tuning deep neural networks

Quality score: 53/100 (Established)

This framework helps machine learning engineers and researchers accelerate the training of very large deep neural networks. It distributes the computational load across multiple processing units, so the same model and training data yield a fully trained, high-performing model in substantially less time. It is designed for practitioners working with extensive datasets and complex models.

Available on PyPI.

Use this if you are a machine learning practitioner struggling with long training times for large deep neural networks and want to leverage parallel computing to speed up the process.

Not ideal if you are working with smaller models or datasets that don't require distributed training, as the overhead might outweigh the benefits.

Tags: deep-learning-training, neural-network-optimization, large-scale-ml, gpu-accelerated-computing, machine-learning-research
Maintenance: 6/25
Adoption: 9/25
Maturity: 25/25
Community: 13/25


Stars: 72
Forks: 9
Language: Python
License: Apache-2.0
Last pushed: Nov 10, 2025
Commits (30d): 0
Dependencies: 1

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/axonn-ai/axonn"

Open to everyone: 100 requests/day with no key required; a free key raises the limit to 1,000 requests/day.
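The same endpoint can be called from Python with only the standard library. This is a minimal sketch: only the URL pattern comes from the curl example on this page, and the structure of the JSON response is not documented here, so the script simply prints whatever top-level keys the API returns rather than assuming specific field names.

```python
# Fetch this project's quality data from the API shown above (stdlib only).
import json
import urllib.request


def quality_url(category: str, owner: str, repo: str) -> str:
    # Mirror the endpoint pattern from the curl example.
    return f"https://pt-edge.onrender.com/api/v1/quality/{category}/{owner}/{repo}"


def fetch_quality(category: str, owner: str, repo: str, timeout: float = 10.0) -> dict:
    # Plain GET with no API key header; keyed access (1,000/day) may differ.
    with urllib.request.urlopen(quality_url(category, owner, repo), timeout=timeout) as resp:
        return json.load(resp)


if __name__ == "__main__":
    data = fetch_quality("ml-frameworks", "axonn-ai", "axonn")
    # Print the top-level keys, since the response schema is not documented here.
    print(sorted(data.keys()))
```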