K-Wu/pytorch-direct_dgl
Large Graph Convolutional Network Training with GPU-Oriented Data Communication Architecture (accepted by PVLDB)
This project helps deep learning engineers train large Graph Convolutional Networks (GCNs) when the dataset cannot fit into GPU memory. It enables efficient, on-the-fly data transfer from CPU memory to the GPU, speeding up training for models that process large, scattered graph data.
No commits in the last 6 months.
Use this if you are a deep learning engineer training large GCNs and are bottlenecked by data transfers from CPU memory during training.
Not ideal if your models are not GCNs, or if your entire dataset already fits comfortably in GPU memory during training.
Stars: 44
Forks: 4
Language: —
License: —
Category:
Last pushed: Jul 01, 2023
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/K-Wu/pytorch-direct_dgl"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000/day.
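If you prefer to call the endpoint from code rather than curl, a minimal sketch with the standard library follows. The URL path shape is taken from the curl example above; the response schema and the key-passing header (`Authorization: Bearer …`) are assumptions, so check the API docs before relying on them.

```python
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def build_url(category: str, repo: str) -> str:
    """Construct the quality-endpoint URL for a repo
    (path shape copied from the curl example)."""
    return f"{API_BASE}/{category}/{repo}"

def fetch_quality(category: str, repo: str, api_key=None) -> dict:
    """Fetch the quality record as a dict.
    The response schema is not documented here."""
    req = urllib.request.Request(build_url(category, repo))
    if api_key:
        # Hypothetical header name; the real API may expect a different one.
        req.add_header("Authorization", f"Bearer {api_key}")
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp)

# Example (makes a live network request):
# data = fetch_quality("ml-frameworks", "K-Wu/pytorch-direct_dgl")
# print(data)
```

Keeping `build_url` separate makes the URL construction testable without hitting the network.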
Higher-rated alternatives
pyg-team/pytorch_geometric
Graph Neural Network Library for PyTorch
a-r-j/graphein
Protein Graph Library
raamana/graynet
Subject-wise networks from structural MRI, both vertex- and voxel-wise features (thickness, GM...
pykale/pykale
Knowledge-Aware machine LEarning (KALE): accessible machine learning from multiple sources for...
dmlc/dgl
Python package built to ease deep learning on graph, on top of existing DL frameworks.