NVlabs/DoRA
[ICML2024 (Oral)] Official PyTorch implementation of DoRA: Weight-Decomposed Low-Rank Adaptation
This project helps machine learning practitioners fine-tune large language models (LLMs) and diffusion models more efficiently. DoRA decomposes each pre-trained weight into a magnitude component and a directional component and fine-tunes both, applying a LoRA-style low-rank update to the direction (see the sketch below). Given a pre-trained model and a dataset for a specific task (such as commonsense reasoning or image generation), it produces a specialized version of the model that performs better on that task without requiring extensive computational resources. Data scientists, AI researchers, and machine learning engineers who need to adapt powerful base models for niche applications will find this useful.
942 stars. No commits in the last 6 months.
Use this if you need to fine-tune a large language model or a diffusion model for a specific task and want better performance and stability than traditional LoRA, especially at lower ranks.
Not ideal if you are looking for a method to train models from scratch or if you are working with very small models where the benefits of parameter-efficient fine-tuning are less pronounced.
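For intuition, here is a minimal PyTorch sketch of the core idea from the paper: the pre-trained weight is decomposed into a trainable magnitude vector and a directional matrix, and a LoRA-style low-rank update is applied to the direction before renormalizing. This is an illustrative simplification (the layer name and the rank/alpha defaults are arbitrary choices here), not the repository's actual implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class DoRALinear(nn.Module):
    # Illustrative sketch of weight-decomposed low-rank adaptation:
    # W' = m * (W0 + scaling * B @ A) / ||W0 + scaling * B @ A||,
    # with the norm taken row-wise. Not the repo's implementation.
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        out_f, in_f = base.weight.shape
        # Frozen pre-trained weight W0 (and bias, if any).
        self.weight = nn.Parameter(base.weight.detach().clone(), requires_grad=False)
        self.bias = base.bias
        # Trainable low-rank factors: delta_W = B @ A, scaled by alpha / rank.
        self.lora_A = nn.Parameter(torch.randn(rank, in_f) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(out_f, rank))
        self.scaling = alpha / rank
        # Trainable magnitude vector, initialized to the row norms of W0.
        self.magnitude = nn.Parameter(self.weight.norm(p=2, dim=1))

    def forward(self, x):
        # Adapt the direction with the low-rank update, renormalize, rescale.
        adapted = self.weight + self.scaling * (self.lora_B @ self.lora_A)
        direction = adapted / adapted.norm(p=2, dim=1, keepdim=True)
        return F.linear(x, self.magnitude.unsqueeze(1) * direction, self.bias)

Because lora_B starts at zero, the layer initially reproduces the frozen base layer exactly; only the magnitude vector and the two small low-rank factors receive gradients during fine-tuning.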
Stars: 942
Forks: 63
Language: Python
License: —
Category: —
Last pushed: Oct 01, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/NVlabs/DoRA"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
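For scripted access, a short Python sketch of the same call using requests; the URL comes from the curl example above, but the response schema is not documented here, so the JSON is simply printed as-is:

import requests

# Same endpoint as the curl example above; no API key is required
# for up to 100 requests/day.
url = "https://pt-edge.onrender.com/api/v1/quality/transformers/NVlabs/DoRA"
resp = requests.get(url, timeout=10)
resp.raise_for_status()
print(resp.json())  # response fields are not documented on this page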
Higher-rated alternatives
OptimalScale/LMFlow
An Extensible Toolkit for Finetuning and Inference of Large Foundation Models. Large Models for All.
adithya-s-k/AI-Engineering.academy
Mastering Applied AI, One Concept at a Time
jax-ml/jax-llm-examples
Minimal yet performant LLM examples in pure JAX
young-geng/scalax
A simple library for scaling up JAX programs
riyanshibohra/TuneKit
Upload your data → Get a fine-tuned SLM. Free.