jinda-liu/R-LoRA

This repository contains the source code and related resources for R-LoRA.

Quality score: 14 / 100 (Experimental)

R-LoRA helps machine learning engineers improve how Large Language Models (LLMs) perform when fine-tuned on many different tasks at once. Given an existing LLM and training data for several tasks, it produces a fine-tuned model that better captures the distinct requirements of each task. It is aimed at developers and researchers working on multi-task LLM adaptation.
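For context on what LoRA-based fine-tuning adapts, below is a minimal sketch of a generic LoRA layer in PyTorch: a frozen pretrained weight plus a trainable low-rank update. This illustrates only the baseline mechanism that multi-task LoRA methods build on; the class name, rank, scaling, and initialization here are assumptions for illustration, not R-LoRA's actual implementation.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen linear layer plus a low-rank update: y = x @ (W + B @ A).T.

    Generic LoRA for illustration only; not the R-LoRA method itself.
    """

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # the pretrained weight stays frozen

        out_features, in_features = base.weight.shape
        # Low-rank factors: A starts small and random, B starts at zero,
        # so the adapter is a no-op at initialization (standard LoRA convention).
        self.A = nn.Parameter(torch.randn(rank, in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(out_features, rank))
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scaling * (x @ self.A.T @ self.B.T)


# Usage: wrap a projection layer of the frozen LLM and train only A and B.
layer = LoRALinear(nn.Linear(4096, 4096), rank=8)
y = layer(torch.randn(2, 4096))
```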

No commits in the last 6 months.

Use this if you are fine-tuning large language models for multiple distinct tasks simultaneously and find that standard LoRA methods aren't performing well enough across all tasks.

Not ideal if you are only fine-tuning an LLM for a single, specific task or if you are not working with LoRA-based fine-tuning.

large-language-models multi-task-learning model-fine-tuning parameter-efficient-fine-tuning deep-learning-research
No License · Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 6 / 25
Maturity 8 / 25
Community 0 / 25


Stars: 20
Forks:
Language: Python
License: None
Category: llm-fine-tuning
Last pushed: Feb 25, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/jinda-liu/R-LoRA"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
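The same endpoint can also be queried from Python instead of curl. This is a minimal sketch assuming the endpoint returns JSON; the response schema is not documented here, so the example prints the raw payload rather than assuming specific field names.

```python
import requests

# Same public endpoint as the curl example above (100 requests/day without a key).
URL = "https://pt-edge.onrender.com/api/v1/quality/transformers/jinda-liu/R-LoRA"

resp = requests.get(URL, timeout=10)
resp.raise_for_status()
data = resp.json()

# Inspect the keys rather than assuming a particular schema.
print(data)
```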