dbaranchuk/memory-efficient-maml

Memory-efficient MAML using gradient checkpointing

Score: 35 / 100 (Emerging)

This project helps machine learning engineers and researchers train meta-learning models more efficiently. It takes a PyTorch model and a MAML training configuration as input and lets you run many more inner-loop meta-learning steps without exceeding GPU memory limits. The output is a meta-learned model that can adapt quickly to new, unseen tasks.
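To see where the memory goes, here is a minimal sketch of the vanilla MAML loop this project makes cheaper: every inner-loop update is kept in the autograd graph so the meta-gradient can flow back through it, and the footprint therefore grows with the number of inner steps. This is generic illustration code assuming PyTorch 2.x and a toy regression task; the model, task sampler, and hyperparameters are hypothetical, and it is not this repository's API (the repository replaces this pattern with gradient checkpointing, recomputing inner steps during the backward pass instead of storing them all).

import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.func import functional_call

model = nn.Sequential(nn.Linear(1, 64), nn.ReLU(), nn.Linear(64, 1))
meta_opt = torch.optim.Adam(model.parameters(), lr=1e-3)
inner_lr, inner_steps = 0.01, 5   # memory use grows roughly linearly with inner_steps

def sample_task():
    # Hypothetical sine-regression task: returns a batch of (input, target) pairs.
    x = torch.rand(10, 1) * 2 - 1
    return x, torch.sin(3 * x)

for step in range(100):
    meta_opt.zero_grad()
    x_s, y_s = sample_task()   # support set, used for inner-loop adaptation
    x_q, y_q = sample_task()   # query set, used for the outer meta-objective

    params = dict(model.named_parameters())
    for _ in range(inner_steps):
        loss = F.mse_loss(functional_call(model, params, (x_s,)), y_s)
        grads = torch.autograd.grad(loss, tuple(params.values()), create_graph=True)
        # Each update stays in the graph, so activations from every inner step are
        # retained until the outer backward pass -- the memory cost that gradient
        # checkpointing trades for extra recomputation.
        params = {name: p - inner_lr * g
                  for (name, p), g in zip(params.items(), grads)}

    meta_loss = F.mse_loss(functional_call(model, params, (x_q,)), y_q)
    meta_loss.backward()   # backprop through all inner steps into the original weights
    meta_opt.step()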

No commits in the last 6 months.

Use this if you are a machine learning practitioner working with Model-Agnostic Meta-Learning (MAML) and you are hitting GPU memory limits when trying to use many inner-loop adaptation steps.

Not ideal if you are not using PyTorch for your deep learning models or if your primary bottleneck is not GPU memory during MAML training.

meta-learning deep-learning-training model-adaptation neural-network-optimization resource-optimization
Stale (6 months) · No package · No dependents
Maintenance 0 / 25
Adoption 9 / 25
Maturity 16 / 25
Community 10 / 25

Stars: 86
Forks: 7
Language: Jupyter Notebook
License: MIT
Last pushed: Dec 30, 2019
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/dbaranchuk/memory-efficient-maml"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
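If you prefer to pull the data programmatically, here is a minimal Python sketch of the same request; only the URL comes from this page, the response is assumed to be JSON, and no particular schema is guaranteed.

import requests

url = ("https://pt-edge.onrender.com/api/v1/quality/"
       "ml-frameworks/dbaranchuk/memory-efficient-maml")
resp = requests.get(url, timeout=10)
resp.raise_for_status()   # fail loudly on rate limiting or server errors
print(resp.json())        # inspect the returned quality/score data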