juzhengz/LoRI

[COLM 2025] LoRI: Reducing Cross-Task Interference in Multi-Task Low-Rank Adaptation

Score: 32/100 (Emerging)

This tool helps AI developers fine-tune large language models (LLMs) more efficiently when working with multiple tasks simultaneously. It takes a base LLM (like LLaMA-3-8B or Mistral-7B) and task-specific datasets, then produces specialized "adapters" that allow the model to perform well on tasks like code generation, mathematical reasoning, or safety alignment without tasks interfering with each other. AI/ML engineers and researchers who build and deploy LLMs for diverse applications would use this.
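The repository ships its own training scripts, but the core idea, attaching small low-rank adapters to a frozen base model so each task trains only a tiny set of extra weights, can be illustrated with a minimal sketch using the Hugging Face PEFT library. This is a generic low-rank adaptation example, not LoRI's actual method or configuration; the model name and hyperparameters below are placeholders.

# Minimal low-rank adapter sketch (generic PEFT LoRA, not LoRI itself).
# LoRI additionally freezes projections and applies sparse masks to reduce
# cross-task interference; hyperparameters here are illustrative placeholders.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "meta-llama/Meta-Llama-3-8B"  # assumed base model name
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

# One adapter per task: only the small rank-r matrices train, while the
# frozen base weights are shared across all tasks.
config = LoraConfig(r=16, lora_alpha=32,
                    target_modules=["q_proj", "v_proj"],
                    task_type="CAUSAL_LM")
model = get_peft_model(model, config)
model.print_trainable_parameters()  # prints the small trainable fraction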

171 stars. No commits in the last 6 months.

Use this if you need to fine-tune a large language model for several distinct tasks and want to avoid performance degradation or "forgetting" on one task when training for another.

Not ideal if you only need single-task fine-tuning or lack hands-on experience with deep learning model development.

Tags: LLM fine-tuning, multi-task learning, natural language processing, AI model development, machine learning research
Badges: No License, Stale (6m), No Package, No Dependents
Maintenance: 2/25
Adoption: 10/25
Maturity: 8/25
Community: 12/25
(The four subscores sum to the overall 32/100.)


Stars: 171
Forks: 14
Language: Python
License: None
Category: llm-fine-tuning
Last pushed: Jul 08, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/juzhengz/LoRI"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
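For programmatic access, here is a minimal Python sketch of the same request. It assumes the endpoint returns JSON and that the requests library is installed; the response schema is not documented here, so the code simply prints the raw payload.

# Fetch the quality record for juzhengz/LoRI from the public endpoint.
# Assumes a JSON response; no field names are assumed beyond the payload itself.
import requests

url = "https://pt-edge.onrender.com/api/v1/quality/transformers/juzhengz/LoRI"
resp = requests.get(url, timeout=10)
resp.raise_for_status()  # surface rate-limit or server errors
print(resp.json())       # inspect the payload; exact schema is undocumented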