Jiacheng-Zhu-AIML/AsymmetryLoRA

Preprint: Asymmetry in Low-Rank Adapters of Foundation Models

Score: 25 / 100 (Experimental)

This project is aimed at machine learning engineers and researchers fine-tuning large foundation models for specific tasks. It provides a specialized method for configuring Low-Rank Adapters (LoRA), with fine-grained control over how the adapter matrices are initialized and updated. Given a foundation model and a target task, users can experiment with different asymmetric LoRA configurations to potentially improve performance or efficiency.
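The asymmetry studied in the preprint is between the two LoRA factors: the down-projection A and the up-projection B play different roles, and the paper's recipe freezes A (even at its random initialization) while training only B. Below is a minimal sketch of that setup using the standard Hugging Face PEFT library; the base model, target modules, and hyperparameters are illustrative assumptions, and this is not the repository's own API.

# Sketch of an asymmetric LoRA setup: freeze the A factors, train only B.
# Uses standard Hugging Face PEFT as an illustration of the idea from the
# preprint; it is not this repository's own interface.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("gpt2")  # illustrative base model

config = LoraConfig(
    r=8,                        # adapter rank
    lora_alpha=16,
    target_modules=["c_attn"],  # GPT-2 attention projection (illustrative)
    lora_dropout=0.0,
)
model = get_peft_model(model, config)

# Asymmetry: keep the randomly initialized A matrices fixed and update
# only the B matrices (which PEFT initializes to zero) during fine-tuning.
for name, param in model.named_parameters():
    if "lora_A" in name:
        param.requires_grad = False

model.print_trainable_parameters()

Freezing A roughly halves the number of trainable adapter parameters at a given rank, which is the efficiency angle the description refers to.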

No commits in the last 6 months.

Use this if you are a machine learning engineer or researcher fine-tuning large language models or other foundation models and want to explore advanced, asymmetric LoRA configurations.

Not ideal if you are looking for a simple, out-of-the-box solution for general model fine-tuning without needing to delve into the specifics of LoRA adapter matrix initialization.

Tags: Large Language Models · Model Fine-tuning · Natural Language Processing · Machine Learning Research · Deep Learning Optimization
Badges: No License · Stale (6 months) · No Package · No Dependents
Maintenance 0 / 25
Adoption 7 / 25
Maturity 8 / 25
Community 10 / 25


Stars: 38
Forks: 4
Language: Python
License: None
Last pushed: Feb 27, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/Jiacheng-Zhu-AIML/AsymmetryLoRA"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
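For programmatic access, here is a short Python sketch equivalent to the curl command above; the endpoint URL is from this page, but the shape of the returned JSON is an assumption, so the example only fetches and prints the payload rather than relying on specific fields.

# Fetch the quality data shown above via the public API.
# The endpoint comes from this page; the JSON schema is not documented
# here, so we print the whole payload instead of assuming field names.
import requests

url = "https://pt-edge.onrender.com/api/v1/quality/transformers/Jiacheng-Zhu-AIML/AsymmetryLoRA"
resp = requests.get(url, timeout=10)
resp.raise_for_status()
print(resp.json())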