K-Wu/pytorch-direct_dgl

Large Graph Convolutional Network Training with GPU-Oriented Data Communication Architecture (accepted by PVLDB)

Score: 26 / 100 (Experimental)

This project helps deep learning engineers train large Graph Convolutional Networks (GCNs) when the full dataset cannot fit into GPU memory. It enables efficient, on-the-fly data transfer from CPU memory to the GPU, speeding up training for models that process large, scattered graph data.

No commits in the last 6 months.

Use this if you are a deep learning engineer training large Graph Convolutional Networks (GCNs) and frequently encounter performance bottlenecks due to data needing to be loaded from CPU memory during training.

Not ideal if your deep learning models are not GCNs, or if your entire dataset already fits comfortably within your GPU's memory during training.

deep-learning-engineering graph-neural-networks large-scale-model-training gpu-optimization machine-learning-infrastructure
No License · Stale (6 months) · No Package · No Dependents
Maintenance: 0 / 25
Adoption: 8 / 25
Maturity: 8 / 25
Community: 10 / 25


Stars: 44
Forks: 4
Language: not listed
License: none
Last pushed: Jul 01, 2023
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/K-Wu/pytorch-direct_dgl"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
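For scripted access, the curl call above can be reproduced in Python. This is a minimal sketch: only the endpoint URL is taken from this page, and the shape of the JSON response is an assumption, so the fetched data is returned as a plain dict without further interpretation.

```python
# Sketch of calling the pt-edge quality API (endpoint from this page).
# The response schema is NOT documented here and is treated as opaque JSON.
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, repo: str) -> str:
    """Build the API URL for a repo's quality data."""
    return f"{BASE}/{category}/{repo}"

def fetch_quality(category: str, repo: str) -> dict:
    """GET the quality JSON (no API key needed up to 100 requests/day)."""
    with urllib.request.urlopen(quality_url(category, repo)) as resp:
        return json.load(resp)

# URL for the repository covered on this page:
url = quality_url("ml-frameworks", "K-Wu/pytorch-direct_dgl")
```

Calling `fetch_quality("ml-frameworks", "K-Wu/pytorch-direct_dgl")` performs the same request as the curl command above.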