luyug/GradCache

Run Effective Large Batch Contrastive Learning Beyond GPU/TPU Memory Constraint

Score: 48 / 100 (Emerging)

This project helps machine learning practitioners train contrastive models that learn relationships between complex data, such as text or images. It lets you use very large effective batch sizes during training without requiring expensive, high-memory GPU hardware. The input is your prepared datasets and models (such as those from Hugging Face), and the output is a trained model that performs as if it had been trained on a much larger, hardware-intensive setup.
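The core idea behind GradCache is a two-pass trick: first compute representations for the whole batch in small, memory-friendly chunks without building an autograd graph, compute the contrastive loss over the full batch, cache the gradients with respect to the representations, then re-encode each chunk with gradients enabled and backpropagate the cached gradients. Below is a minimal PyTorch sketch of that idea, written for illustration only; it is not the library's actual API (the package itself exposes a `GradCache` class), and the toy encoder and loss here are hypothetical:

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Toy twin-tower setup: one shared linear encoder for queries and keys.
enc = torch.nn.Linear(8, 4)
xq = torch.randn(16, 8)   # queries
xk = torch.randn(16, 8)   # keys; the positive pair shares the row index

def loss_fn(q, k):
    # In-batch-negatives contrastive loss (InfoNCE-style).
    logits = q @ k.t()
    return F.cross_entropy(logits, torch.arange(q.size(0)))

# Reference: ordinary full-batch forward + backward.
loss_fn(enc(xq), enc(xk)).backward()
ref_grad = enc.weight.grad.clone()
enc.zero_grad()

# Gradient-cache: two passes over small chunks.
chunk = 4
with torch.no_grad():  # pass 1: representations only, no graph kept
    q = torch.cat([enc(c) for c in xq.split(chunk)])
    k = torch.cat([enc(c) for c in xk.split(chunk)])
q.requires_grad_(); k.requires_grad_()
loss_fn(q, k).backward()  # full-batch loss; caches d(loss)/d(reps)
for x, rep_grad in [(xq, q.grad), (xk, k.grad)]:
    for xc, g in zip(x.split(chunk), rep_grad.split(chunk)):
        enc(xc).backward(g)  # pass 2: re-encode chunk, push cached grads

# The chunked two-pass gradient matches the full-batch gradient.
print(torch.allclose(enc.weight.grad, ref_grad, atol=1e-5))  # True
```

Peak activation memory is bounded by the chunk size rather than the full batch, while the loss still sees every in-batch negative, which is why the trained model matches large-batch training.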

429 stars. No commits in the last 6 months. Available on PyPI.

Use this if you are training deep learning models using contrastive learning and are limited by your GPU or TPU memory, preventing you from using large batch sizes.

Not ideal if your training process doesn't involve contrastive learning or if you already have access to abundant high-memory computing resources.

Tags: Machine Learning, Training, Deep Learning, Natural Language Processing, Computer Vision, Information Retrieval
Flags: Stale (6m), No Dependents
Maintenance: 0 / 25
Adoption: 10 / 25
Maturity: 25 / 25
Community: 13 / 25


Stars: 429
Forks: 27
Language: Python
License: Apache-2.0
Last pushed: Mar 26, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/vector-db/luyug/GradCache"

Open to everyone: 100 requests/day with no key needed, or get a free key for 1,000 requests/day.