doem97/ICLR26_mtLoRA

[ICLR 2026] Official implementation (Claude Agent reproduction supported) of the paper "mtLoRA: Scalable Multi-Task Low-Rank Model Adaptation"; +2.3% over SOTA with 47% fewer parameters.

Score: 18 / 100 (Experimental)

This project helps AI researchers and machine learning engineers fine-tune large language models for many different tasks more efficiently. It takes a base language model and training data for 15-25+ tasks, then outputs a specialized model that performs better across all tasks with fewer parameters and less training time. Researchers working on advanced multi-task learning for large AI models will find this useful.

Use this if you are trying to adapt large language models to a wide range of tasks and are encountering performance degradation or high computational costs with existing methods.

Not ideal if you are working with a single task or a small number of tasks, as the benefits of this scalable multi-task adaptation approach might not be fully realized.

multi-task learning · large language models · model adaptation · AI research · deep learning
No License · No Package · No Dependents
Maintenance: 10 / 25
Adoption: 5 / 25
Maturity: 3 / 25
Community: 0 / 25

Stars: 12
Forks:
Language: Python
License: None
Last pushed: Mar 04, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/doem97/ICLR26_mtLoRA"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
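
For scripted access, here is a minimal Python sketch of the same request (assuming the endpoint returns a JSON body; no response schema is documented here, so the script simply pretty-prints whatever comes back):

import json
import requests

# Same public endpoint as the curl example above.
URL = "https://pt-edge.onrender.com/api/v1/quality/transformers/doem97/ICLR26_mtLoRA"

def fetch_quality(url: str = URL) -> dict:
    """Fetch the quality record; assumes a JSON object in the response body."""
    resp = requests.get(url, timeout=10)
    resp.raise_for_status()  # surfaces HTTP errors, e.g. hitting the daily rate limit
    return resp.json()

if __name__ == "__main__":
    print(json.dumps(fetch_quality(), indent=2))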