NVlabs/DoRA

[ICML2024 (Oral)] Official PyTorch implementation of DoRA: Weight-Decomposed Low-Rank Adaptation

Score: 41 / 100 (Emerging)

This project helps machine learning practitioners fine-tune large language models (LLMs) and diffusion models more efficiently. It takes a pre-trained model and a dataset for a specific task (like commonsense reasoning or image generation) and produces a specialized version of the model that performs better on that task, without requiring extensive computational resources. Data scientists, AI researchers, and machine learning engineers who need to adapt powerful base models for niche applications would find this useful.
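DoRA is also exposed through Hugging Face PEFT, which added a use_dora flag to LoraConfig (peft >= 0.9). The sketch below shows how such a fine-tune is typically configured; the model name and target module names are illustrative, not taken from this repository:

# Minimal sketch: enabling DoRA via Hugging Face PEFT (assumes peft >= 0.9).
# Model and target_modules are illustrative placeholders.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")

config = LoraConfig(
    r=16,                                  # low-rank dimension
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],   # attention projections to adapt
    use_dora=True,                         # decompose weights into magnitude + direction
)

model = get_peft_model(model, config)
model.print_trainable_parameters()         # only the adapter parameters train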

942 stars. No commits in the last 6 months.

Use this if you need to fine-tune a large language model or a diffusion model for a specific task and want better accuracy and training stability than standard LoRA, especially at low ranks.
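The low-rank advantage comes from DoRA's reparameterization: the pretrained weight is decomposed into a magnitude vector and a direction matrix, W' = m * (W0 + BA) / ||W0 + BA||_c, where only the direction receives the LoRA-style low-rank update and m is initialized to the column-wise norm of W0. A minimal NumPy sketch of the merged weight, with illustrative shapes:

# Minimal NumPy sketch of DoRA's merged weight (notation follows the paper).
import numpy as np

d_out, d_in, r = 64, 32, 4
W0 = np.random.randn(d_out, d_in)        # frozen pretrained weight
B = np.zeros((d_out, r))                 # LoRA-style factors (B starts at zero)
A = np.random.randn(r, d_in)

m = np.linalg.norm(W0, axis=0)           # learnable magnitude, init ||W0||_c

V = W0 + B @ A                           # direction updated via low-rank term
W = m * (V / np.linalg.norm(V, axis=0))  # rescale each column to magnitude m

# At initialization (B = 0), the merged weight equals W0 exactly.
assert np.allclose(W, W0)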

Not ideal if you are looking for a method to train models from scratch or if you are working with very small models where the benefits of parameter-efficient fine-tuning are less pronounced.

Topics: large-language-models, fine-tuning, image-generation, natural-language-processing, model-adaptation
Flags: Stale (6 months), No Package, No Dependents
Maintenance: 0 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 15 / 25


Stars: 942
Forks: 63
Language: Python
License: (none listed)
Category: llm-fine-tuning
Last pushed: Oct 01, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/NVlabs/DoRA"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
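The same endpoint can also be queried from Python; a small sketch, assuming the endpoint returns JSON and the requests package is installed:

# Sketch: fetching the quality data from Python (assumes a JSON response).
import requests

url = "https://pt-edge.onrender.com/api/v1/quality/transformers/NVlabs/DoRA"
resp = requests.get(url, timeout=10)
resp.raise_for_status()

data = resp.json()
print(data)  # e.g. score, stars, forks; exact fields depend on the API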