RoyZry98/T-REX-Pytorch

[Arxiv 2025] Official code for T-REX: Mixture-of-Rank-One-Experts with semantic-aware Intuition for Multi-task Large Language Model Finetuning

Overall score: 20 / 100 (Experimental)

This project helps machine learning engineers fine-tune large language models (LLMs) to perform multiple tasks more efficiently. Using a technique called Mixture-of-Rank-One-Experts, it takes a base LLM and a multi-task dataset and produces a single specialized model that handles diverse language tasks. It is aimed at ML engineers building applications that need one model to perform well across functions such as summarization, translation, and question answering.
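The repository's code is not reproduced on this page, so the snippet below is only a minimal sketch of the general mixture-of-rank-one-experts idea it describes: a frozen base projection augmented with several rank-one updates that a learned gate mixes per input. The class name `RankOneMoELinear`, the gating scheme, the initialization, and all hyperparameters are assumptions for illustration, not the T-REX implementation or its API.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RankOneMoELinear(nn.Module):
    """Hypothetical sketch of a mixture-of-rank-one-experts adapter:
    a frozen base linear layer plus K rank-one updates u_k v_k^T,
    mixed per token by a learned gate over the input."""

    def __init__(self, base_linear: nn.Linear, num_experts: int = 8):
        super().__init__()
        self.base = base_linear
        for p in self.base.parameters():
            p.requires_grad = False  # keep the pretrained weights frozen

        out_f, in_f = base_linear.out_features, base_linear.in_features
        # Expert k's weight update is the outer product u[k] v[k]^T.
        self.u = nn.Parameter(torch.zeros(num_experts, out_f))   # zero init: no change at start
        self.v = nn.Parameter(torch.randn(num_experts, in_f) * 0.02)
        self.gate = nn.Linear(in_f, num_experts)  # input-conditioned router

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: any (..., in_features) shape, e.g. (batch, seq, in_features)
        weights = F.softmax(self.gate(x), dim=-1)         # (..., K) mixing weights
        proj = torch.einsum("...d,kd->...k", x, self.v)   # v_k^T x for each expert
        delta = torch.einsum("...k,kd->...d", weights * proj, self.u)
        return self.base(x) + delta


# Example usage on a toy projection (sizes are illustrative only)
layer = RankOneMoELinear(nn.Linear(64, 64), num_experts=4)
out = layer(torch.randn(2, 10, 64))
```

Because each expert adds only two vectors per wrapped layer, the trainable parameter count stays small even with many experts, which is the usual appeal of rank-one mixtures for multi-task fine-tuning.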

No commits in the last 6 months.

Use this if you need to fine-tune a large language model to perform well across multiple distinct natural language processing tasks simultaneously.

Not ideal if you only need to fine-tune a model for a single specific task or if you are not comfortable working with command-line tools and machine learning frameworks.

Tags: Large Language Model Fine-tuning · Multi-task Learning · Natural Language Processing · Machine Learning Engineering · AI Model Specialization
No License · Stale (6m) · No Package · No Dependents
Maintenance: 2 / 25
Adoption: 6 / 25
Maturity: 7 / 25
Community: 5 / 25


Stars: 17
Forks: 1
Language: Python
License: None
Last pushed: May 16, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/RoyZry98/T-REX-Pytorch"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
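The same request can be made from Python. The sketch below assumes the endpoint returns a JSON body and simply pretty-prints whatever fields it provides; it uses the third-party `requests` package and is not an official client for this API.

```python
import json
import requests

# Hypothetical usage: fetch the quality report and pretty-print the JSON body.
url = "https://pt-edge.onrender.com/api/v1/quality/transformers/RoyZry98/T-REX-Pytorch"
resp = requests.get(url, timeout=30)
resp.raise_for_status()                    # fail loudly on HTTP errors
print(json.dumps(resp.json(), indent=2))   # field names depend on the API's response
```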