K-Wu/pytorch-direct

Code for Large Graph Convolutional Network Training with GPU-Oriented Data Communication Architecture (accepted by PVLDB). The write-up (https://arxiv.org/abs/2101.07956) is outdated: it explains the engineering details, but only a portion of the functionality has been migrated to this newer PyTorch version, 1.8.0 nightly (e152ca5).

Score: 36 / 100 (Emerging)

This project offers a specialized version of PyTorch for researchers and engineers who work with very large graph neural networks. It improves how data is moved to the GPU, making the training process faster and more efficient for complex graph structures. The end user is typically a machine learning engineer or data scientist dealing with large-scale graph data.

No commits in the last 6 months.

Use this if you are training large graph convolutional networks and encountering performance bottlenecks due to data communication with your GPU.

Not ideal if you are working with small to medium-sized graphs or are not encountering GPU data transfer limitations.

Tags: large-scale graphs · graph neural networks · GPU computing · deep learning optimization
Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 5 / 25
Maturity 16 / 25
Community 15 / 25


Stars: 9
Forks: 4
Language: C++
License: (none listed)
Last pushed: Jun 22, 2023
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/K-Wu/pytorch-direct"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
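The same endpoint can be queried from Python. A minimal sketch: the URL components come from the curl example above, but the shape of the JSON response is not documented on this page, so the fetch is left as an illustration.

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(collection: str, owner: str, repo: str) -> str:
    # Build the same URL as the curl example above.
    return f"{BASE}/{collection}/{owner}/{repo}"

url = quality_url("ml-frameworks", "K-Wu", "pytorch-direct")

# Uncomment to fetch (no API key needed, up to 100 requests/day).
# The response is assumed to be JSON; its fields are not specified here.
# with urllib.request.urlopen(url) as resp:
#     data = json.load(resp)
#     print(data)
```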