Jiacheng-Zhu-AIML/AsymmetryLoRA
Preprint: Asymmetry in Low-Rank Adapters of Foundation Models
This project is aimed at machine learning engineers and researchers who are fine-tuning large foundation models for specific tasks. It provides a specialized method for configuring Low-Rank Adapters (LoRA), giving fine-grained control over how the two adapter matrices are initialized and updated. Given a foundation model and a target task, users can experiment with different asymmetric LoRA configurations to potentially improve performance or efficiency.
No commits in the last 6 months.
Use this if you are fine-tuning large language models or other foundation models and want to explore advanced, asymmetric LoRA configurations (a minimal sketch of the idea follows below).
Not ideal if you want a simple, out-of-the-box fine-tuning solution and would rather not deal with the specifics of LoRA adapter matrix initialization.
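The core observation behind the preprint is that the two LoRA factors play asymmetric roles: the down-projection A can stay frozen at a random initialization while only the up-projection B is trained. The sketch below illustrates that idea in plain PyTorch; it is not the repository's actual API, and the class name, rank, and scaling choices are assumptions.

# Illustrative sketch of asymmetric LoRA: freeze the base weight and the
# random down-projection A, train only the zero-initialized up-projection B.
# Not the repository's API; names and hyperparameters are assumptions.
import torch
import torch.nn as nn

class AsymmetricLoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # frozen pretrained weight
        self.scaling = alpha / rank
        # A: fixed random projection, never updated (the "asymmetry")
        self.lora_A = nn.Parameter(
            torch.randn(rank, base.in_features) / rank ** 0.5,
            requires_grad=False,
        )
        # B: the only trainable adapter matrix, zero-initialized so the
        # adapted model starts out identical to the base model
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, rank))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scaling

# Usage (hypothetical): wrap an attention projection of a pretrained model
# layer.q_proj = AsymmetricLoRALinear(layer.q_proj, rank=8)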
Stars: 38
Forks: 4
Language: Python
License: —
Category:
Last pushed: Feb 27, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/Jiacheng-Zhu-AIML/AsymmetryLoRA"
Open to everyone — 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
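If you prefer Python over curl, a minimal equivalent using requests is sketched below; the response is assumed to be JSON, since its schema isn't documented here.

# Minimal sketch: fetch the same data in Python instead of curl.
# Assumes the endpoint returns JSON; no key is needed under the free tier.
import requests

url = "https://pt-edge.onrender.com/api/v1/quality/transformers/Jiacheng-Zhu-AIML/AsymmetryLoRA"
resp = requests.get(url, timeout=10)
resp.raise_for_status()
print(resp.json())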
Higher-rated alternatives
unslothai/unsloth
Fine-tuning & Reinforcement Learning for LLMs. 🦥 Train OpenAI gpt-oss, DeepSeek, Qwen, Llama,...
huggingface/peft
🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.
modelscope/ms-swift
Use PEFT or Full-parameter to CPT/SFT/DPO/GRPO 600+ LLMs (Qwen3.5, DeepSeek-R1, GLM-5,...
oumi-ai/oumi
Easily fine-tune, evaluate and deploy gpt-oss, Qwen3, DeepSeek-R1, or any open source LLM / VLM!
linkedin/Liger-Kernel
Efficient Triton Kernels for LLM Training