axonn-ai/axonn
Parallel framework for training and fine-tuning deep neural networks
This framework helps machine learning engineers and researchers accelerate the training of very large deep neural networks. By distributing the computational load across multiple processing units, it trains a given model on a given dataset in far less wall-clock time than single-device training would take. It's designed for those working with extensive datasets and complex models.
Available on PyPI.
Use this if you are a machine learning practitioner struggling with long training times for large deep neural networks and want to leverage parallel computing to speed up the process.
Not ideal if you are working with smaller models or datasets that don't require distributed training, as the overhead might outweigh the benefits.
Stars
72
Forks
9
Language
Python
License
Apache-2.0
Category
Last pushed
Nov 10, 2025
Commits (30d)
0
Dependencies
1
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/axonn-ai/axonn"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
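The curl command above can also be wrapped in a small Python helper. This is a minimal sketch: only the endpoint URL pattern comes from this page, and the structure of the JSON response is an assumption to verify against a real call.

```python
# Sketch of a client for the pt-edge quality API shown above.
# Only the URL pattern is taken from this page; the shape of the
# returned JSON (field names, nesting) is an assumption.
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"


def build_url(category: str, owner: str, repo: str) -> str:
    """Build the endpoint URL for a repository in a given category."""
    return f"{API_BASE}/{category}/{owner}/{repo}"


def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """GET the endpoint and parse the JSON body (counts toward the daily limit)."""
    with urllib.request.urlopen(build_url(category, owner, repo)) as resp:
        return json.load(resp)


if __name__ == "__main__":
    # Prints the same URL the curl example uses; call fetch_quality()
    # to actually issue the request.
    print(build_url("ml-frameworks", "axonn-ai", "axonn"))
```

Keeping URL construction separate from the network call makes the helper easy to test offline and to extend later, for example by adding an API-key header for the 1,000-requests/day tier.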
Related frameworks
apache/tvm
Open Machine Learning Compiler Framework
uxlfoundation/oneDNN
oneAPI Deep Neural Network Library (oneDNN)
Tencent/ncnn
ncnn is a high-performance neural network inference framework optimized for the mobile platform
OpenMined/TenSEAL
A library for doing homomorphic encryption operations on tensors
iree-org/iree-turbine
IREE's PyTorch Frontend, based on Torch Dynamo.