VITA-Group/Q-GaLore

Q-GaLore: Quantized GaLore with INT4 Projection and Layer-Adaptive Low-Rank Gradients.

Score: 40 / 100 (Emerging)

This project offers a memory-efficient way to train large language models, such as LLaMA-7B, on GPUs with limited memory (e.g., 16GB). Given a model configuration and training data, it produces a fully trained model while using significantly less GPU memory during training. It is aimed at machine learning engineers and researchers working on large-scale AI models.
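As a rough, self-contained illustration of the technique the project is named for (this is not the repository's actual code), the sketch below projects a weight gradient onto a low-rank subspace obtained via SVD, GaLore-style, and simulates storing the projection matrix in INT4. All function names here are illustrative.

import torch

def int4_quantize(t: torch.Tensor):
    """Simulate INT4 quantization: round to the signed 4-bit range [-8, 7]."""
    scale = t.abs().max() / 7.0
    q = torch.clamp(torch.round(t / scale), -8, 7)
    return q, scale

def int4_dequantize(q: torch.Tensor, scale: torch.Tensor):
    return q * scale

def low_rank_projection(grad: torch.Tensor, rank: int):
    """Rank-r orthonormal basis from the gradient's SVD (GaLore-style)."""
    U, S, Vh = torch.linalg.svd(grad, full_matrices=False)
    return U[:, :rank]  # (m, r)

# Toy example: a 256x512 stand-in for a weight gradient.
grad = torch.randn(256, 512)
P = low_rank_projection(grad, rank=32)

# The quantization trick: keep the projector in INT4 instead of FP32/BF16.
P_q, scale = int4_quantize(P)
P_deq = int4_dequantize(P_q, scale)

# Optimizer state can then live in the low-rank space: r x n instead of m x n.
low_rank_grad = P_deq.T @ grad   # (32, 512)
update = P_deq @ low_rank_grad   # projected back to the full (256, 512) shape

print(f"full grad: {grad.shape}, low-rank state: {low_rank_grad.shape}")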

203 stars. No commits in the last 6 months.

Use this if you need to pre-train or fine-tune large language models but are constrained by the memory capacity of your GPUs.

Not ideal if you are working with small models or have access to ample high-memory GPU resources, as the setup might add unnecessary complexity.
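If you do adopt it, usage likely follows the pattern of the related GaLore package: install the library and swap its optimizer in for AdamW. The sketch below assumes a q_galore_torch package exposing a QGaLoreAdamW8bit optimizer with GaLore-style per-group settings; the package name, class name, and keyword arguments are all assumptions, so check the repository README for the actual API.

# Hypothetical usage sketch; names are assumptions modeled on GaLore.
import torch.nn as nn
from q_galore_torch import QGaLoreAdamW8bit  # assumed import

model = nn.Linear(4096, 4096)  # stand-in for a LLaMA-style layer

param_groups = [{
    "params": model.parameters(),
    "rank": 256,             # low-rank projection dimension (assumed kwarg)
    "update_proj_gap": 200,  # steps between projector refreshes (assumed kwarg)
}]
optimizer = QGaLoreAdamW8bit(param_groups, lr=1e-4)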

Tags: large-language-models, model-training, deep-learning, resource-optimization, AI-research
Status: stale for 6 months, no published package, no known dependents

Score breakdown:
Maintenance: 0 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 14 / 25


Stars: 203
Forks: 19
Language: Python
License: Apache-2.0
Last pushed: Jul 17, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/VITA-Group/Q-GaLore"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
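The same report can be fetched from Python using only the standard library. The snippet below assumes the endpoint returns a JSON body, which is typical for such APIs but not confirmed by this page.

import json
import urllib.request

# Fetch the quality report for this repo (100 requests/day without a key).
url = "https://pt-edge.onrender.com/api/v1/quality/transformers/VITA-Group/Q-GaLore"
with urllib.request.urlopen(url) as resp:
    data = json.load(resp)  # assumes a JSON response body

print(data)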