hpcaitech/CachedEmbedding

A memory-efficient DLRM training solution using ColossalAI

Quality score: 40 / 100 (Emerging)

This project helps machine learning engineers and researchers train deep learning recommendation models whose embedding tables exceed GPU memory limits. It takes large categorical datasets (such as Criteo 1TB) and trains a recommendation model more efficiently by dynamically moving embedding data between CPU and GPU memory. This makes it possible to train models that would otherwise not fit on a single GPU, since it significantly reduces the required GPU memory.

107 stars. No commits in the last 6 months.

Use this if you are a machine learning engineer or researcher training deep learning recommendation models and struggling with out-of-memory errors due to very large embedding tables on your GPUs.

Not ideal if your recommendation model's embedding tables easily fit within your available GPU memory, as it introduces a slight overhead compared to direct GPU-only solutions.
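The core idea — keeping only the "hot" embedding rows on the GPU while the full table stays in CPU memory — can be sketched as a software-managed LRU cache. This is a minimal, illustrative toy in plain Python; the class name, buffer layout, and eviction policy here are assumptions for explanation, not the project's actual API.

```python
from collections import OrderedDict

class ToyCachedEmbedding:
    """Toy software-managed embedding cache (illustration only).

    The full table lives in "host" memory; a small fixed-size buffer
    of recently used rows stands in for GPU memory. Lookups that miss
    the cache evict the least-recently-used row back to the host.
    """

    def __init__(self, num_rows, dim, cache_rows):
        # Full embedding table in host (CPU) memory.
        self.host = [[0.0] * dim for _ in range(num_rows)]
        self.cache_rows = cache_rows
        # row_id -> row, ordered by recency (oldest first).
        self.cache = OrderedDict()
        self.misses = 0

    def lookup(self, row_id):
        if row_id in self.cache:
            # Cache hit: refresh recency and serve from "device" memory.
            self.cache.move_to_end(row_id)
            return self.cache[row_id]
        self.misses += 1
        if len(self.cache) >= self.cache_rows:
            # Evict the least-recently-used row, writing it back to host.
            old_id, old_row = self.cache.popitem(last=False)
            self.host[old_id] = old_row
        # "Transfer" the requested row into the device buffer.
        row = self.host[row_id]
        self.cache[row_id] = row
        return row

emb = ToyCachedEmbedding(num_rows=1000, dim=4, cache_rows=2)
emb.lookup(1)
emb.lookup(2)
emb.lookup(1)  # hit: row 1 stays hot
emb.lookup(3)  # miss: evicts row 2, the least recently used
print(emb.misses)  # prints 3
```

Because real DLRM workloads access embedding rows with a highly skewed distribution, a small device-side buffer like this can serve most lookups, which is why the approach can train tables far larger than GPU memory at a modest overhead.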

deep-learning recommendation-systems large-scale-ml gpu-optimization model-training
Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 9 / 25
Maturity 16 / 25
Community 15 / 25


Stars: 107
Forks: 14
Language: Python
License: Apache-2.0
Last pushed: Nov 22, 2022
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/embeddings/hpcaitech/CachedEmbedding"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
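The same endpoint can be queried from Python with the standard library. The URL pattern follows the curl example above; the authorization header name used for the optional API key is an assumption, not documented behavior of this service.

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality/embeddings"

def quality_url(owner, repo):
    # Build the per-repo endpoint URL, matching the curl example.
    return f"{BASE}/{owner}/{repo}"

def fetch_quality(owner, repo, api_key=None):
    req = urllib.request.Request(quality_url(owner, repo))
    if api_key:
        # Hypothetical header: how the key is sent is an assumption.
        req.add_header("Authorization", f"Bearer {api_key}")
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp)

print(quality_url("hpcaitech", "CachedEmbedding"))
```

Without a key this is limited to 100 requests/day, so batch jobs should cache responses or use a free key for the 1,000/day tier.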