ma2za/torch-adapters

A small library of PyTorch adaptation modules

Quality score: 30/100 (Emerging)

This library helps machine learning practitioners fine-tune large pre-trained models more efficiently. It takes an existing PyTorch model and applies parameter-efficient adaptation techniques such as LoRA or Prompt Tuning, producing a model specialized for a new task without extensive re-training or storage. It is aimed at data scientists, ML engineers, and researchers working with large language or vision models.
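The techniques named above train only a small set of new parameters on top of a frozen base model. As a rough illustration, here is a generic LoRA-style wrapper in plain PyTorch; the class name `LoRALinear` and its interface are illustrative assumptions, not torch-adapters' actual API:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Minimal LoRA sketch: freeze a pretrained nn.Linear and learn a
    low-rank update, y = W x + (alpha/r) * B A x.
    Illustrative of the technique only, not torch-adapters' API."""

    def __init__(self, base: nn.Linear, r: int = 4, alpha: float = 8.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # freeze the pretrained weights
        # A is small-random, B is zero, so the wrapper starts as an identity
        # on top of the base layer and only the low-rank factors train.
        self.lora_a = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, r))
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * (x @ self.lora_a.T) @ self.lora_b.T

layer = LoRALinear(nn.Linear(16, 8), r=2)
out = layer(torch.randn(3, 16))  # shape (3, 8); only 2*16 + 8*2 = 48 params train
```

Storage savings come from checkpointing only the low-rank factors (48 numbers here) instead of the full 16×8 weight matrix.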

No commits in the last 6 months. Available on PyPI.

Use this if you need to adapt large pre-trained PyTorch models to new tasks or datasets without the computational cost and storage requirements of full fine-tuning.

Not ideal if you are building a model from scratch or performing full-scale fine-tuning where parameter efficiency is not a primary concern.

deep-learning model-fine-tuning natural-language-processing computer-vision machine-learning-engineering
Stale (6 months)
Maintenance: 0/25
Adoption: 5/25
Maturity: 25/25
Community: 0/25


Stars: 9
Forks:
Language: Python
License: MIT
Last pushed: Jul 28, 2023
Commits (last 30 days): 0
Dependencies: 1

Get this data via the API:

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/ma2za/torch-adapters"

Open to everyone: 100 requests/day with no key required; a free key raises the limit to 1,000/day.
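The curl call above can be reproduced in Python with the standard library. Only the endpoint URL is taken from the page; the response schema is not documented here, so this sketch simply fetches and decodes the JSON (helper names are assumptions):

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality/transformers"

def quality_url(owner: str, repo: str) -> str:
    # Build the per-repository endpoint in the shape shown above.
    return f"{BASE}/{owner}/{repo}"

def fetch_quality(owner: str, repo: str, timeout: float = 10.0) -> dict:
    # Response fields are undocumented here; inspect the returned dict.
    with urllib.request.urlopen(quality_url(owner, repo), timeout=timeout) as resp:
        return json.load(resp)

url = quality_url("ma2za", "torch-adapters")
```

Within the anonymous tier, `fetch_quality("ma2za", "torch-adapters")` would return the score data as a dictionary.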