mlsw/partial-embedding-matrix-adaptation

Vocabulary-level memory efficiency for language model fine-tuning.

Score: 20 / 100 (Experimental)

This tool helps machine learning engineers and researchers fine-tune large language models more efficiently. Given a pre-trained language model and a target dataset, it produces a fine-tuned model while significantly reducing the memory footprint of the embedding matrix, without sacrificing performance. It is aimed at practitioners in deep learning and natural language processing.
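The repository name and tagline suggest adapting only part of the embedding matrix at the vocabulary level. As a rough illustration of that general idea (not this repository's actual implementation), the sketch below freezes all embedding rows except those for token ids that occur in the fine-tuning data, assuming PyTorch; all names and values are illustrative:

```python
import torch
import torch.nn as nn

# Illustrative sizes, not taken from the repository.
vocab_size, dim = 50_000, 768
embedding = nn.Embedding(vocab_size, dim)

# Token ids actually observed in the fine-tuning dataset (made up here).
used_ids = torch.tensor([5, 17, 42, 999])
mask = torch.zeros(vocab_size, 1)
mask[used_ids] = 1.0

# Zero the gradient for unused rows so the optimizer never updates them.
embedding.weight.register_hook(lambda grad: grad * mask)

batch = torch.tensor([[5, 17], [42, 999]])
embedding(batch).sum().backward()

# Only the selected rows receive non-zero gradients.
assert float(embedding.weight.grad[5].abs().sum()) > 0
assert float(embedding.weight.grad[0].abs().sum()) == 0
```

In practice the memory saving comes from not allocating optimizer state (e.g. Adam moments) for the frozen rows; the gradient mask above is only the simplest way to show which rows are adapted.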

No commits in the last 6 months.

Use this if you are a machine learning engineer or researcher fine-tuning a language model and are encountering memory limitations on your hardware.

Not ideal if you are not working with language models or are not concerned with memory usage during fine-tuning.

language-model-fine-tuning deep-learning-optimization natural-language-processing model-memory-reduction machine-learning-engineering
Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 4 / 25
Maturity 16 / 25
Community 0 / 25


Stars: 8
Forks:
Language: Python
License: MIT
Last pushed: Mar 24, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/mlsw/partial-embedding-matrix-adaptation"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000/day.
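The same endpoint can be called from Python using only the standard library. The response schema is not documented on this page, so the sketch below just builds the URL (the `quality_url` helper is hypothetical) and leaves the live request commented out:

```python
import json
from urllib.request import urlopen

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(registry: str, repo: str) -> str:
    """Build the quality-data endpoint for a repository (helper is illustrative)."""
    return f"{BASE}/{registry}/{repo}"

url = quality_url("transformers", "mlsw/partial-embedding-matrix-adaptation")

# Uncomment to fetch live data (subject to the 100 requests/day limit):
# with urlopen(url) as resp:
#     data = json.load(resp)
#     print(data)
```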