K-Wu/pytorch-direct
Code for Large Graph Convolutional Network Training with GPU-Oriented Data Communication Architecture (accepted by PVLDB). The outdated write-up (https://arxiv.org/abs/2101.07956) explains the engineering details, but only a portion of the functionality has been migrated to this newer PyTorch version, 1.8.0 nightly (e152ca5).
This project offers a specialized build of PyTorch for researchers and engineers who train very large graph neural networks. It optimizes how data is moved to the GPU, making training faster and more efficient on complex graph structures. The typical user is a machine learning engineer or data scientist working with large-scale graph data.
No commits in the last 6 months.
Use this if you are training large graph convolutional networks and encountering performance bottlenecks due to data communication with your GPU.
Not ideal if you are working with small to medium-sized graphs or are not encountering GPU data transfer limitations.
Stars: 9
Forks: 4
Language: C++
License: —
Category:
Last pushed: Jun 22, 2023
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/K-Wu/pytorch-direct"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
pyg-team/pytorch_geometric
Graph Neural Network Library for PyTorch
a-r-j/graphein
Protein Graph Library
raamana/graynet
Subject-wise networks from structural MRI, both vertex- and voxel-wise features (thickness, GM...
pykale/pykale
Knowledge-Aware machine LEarning (KALE): accessible machine learning from multiple sources for...
dmlc/dgl
Python package built to ease deep learning on graph, on top of existing DL frameworks.