liuqidong07/MOELoRA-peft

[SIGIR'24] The official implementation of MOELoRA.

Quality score: 41 / 100 (Emerging)

This project helps medical professionals or researchers fine-tune large language models (LLMs) for multiple medical tasks simultaneously. It takes existing medical datasets and a ChatGLM-6B model as input, then outputs a specialized model capable of performing various medical information processing tasks more efficiently. This is ideal for those in medical research or clinical support who need to adapt LLMs for specific healthcare applications.
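
MOELoRA combines LoRA's low-rank adapters with a mixture-of-experts gate so a single frozen backbone can serve several tasks at once. Below is a minimal sketch of that idea in PyTorch; the class name, tensor shapes, and task-conditioned gating scheme are illustrative assumptions for exposition, not the repository's actual implementation.

import torch
import torch.nn as nn

class MoELoRALinear(nn.Module):
    """Frozen linear layer plus a task-gated mixture of low-rank experts.

    Illustrative sketch: each expert is a LoRA (A, B) pair; a learned
    per-task gate mixes the experts' low-rank updates.
    """

    def __init__(self, base: nn.Linear, n_experts: int = 8, rank: int = 4,
                 n_tasks: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():      # freeze the pretrained weights
            p.requires_grad_(False)
        self.scaling = alpha / rank
        # One low-rank (A, B) pair per expert; B starts at zero so the
        # initial update is a no-op, as in standard LoRA.
        self.lora_A = nn.Parameter(torch.randn(n_experts, base.in_features, rank) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(n_experts, rank, base.out_features))
        # Task-conditioned gate: an embedding per task id, softmaxed
        # into mixture weights over the experts.
        self.gate = nn.Embedding(n_tasks, n_experts)

    def forward(self, x: torch.Tensor, task_id: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, in_features); task_id: (batch,)
        mix = torch.softmax(self.gate(task_id), dim=-1)   # (batch, n_experts)
        # Per-expert low-rank update x @ A_e @ B_e, then mix by task gate.
        delta = torch.einsum("bsi,eir,ero->bseo", x, self.lora_A, self.lora_B)
        delta = torch.einsum("bseo,be->bso", delta, mix)
        return self.base(x) + self.scaling * delta

Only the expert matrices and the gate are trainable, so the parameter overhead stays small while the per-task gate yields a different effective update for each medical task.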

189 stars. No commits in the last 6 months.

Use this if you need to efficiently adapt a large language model like ChatGLM-6B to handle multiple medical natural language processing tasks, such as patient record analysis or medical question answering.

Not ideal if you are not working with medical text data or lack the hardware required to train large models.

medical-nlp healthcare-ai clinical-research multi-task-learning language-model-specialization
Status: Stale (6 months) · No package published · No dependents

Score breakdown:
Maintenance: 0 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 15 / 25

Stars: 189
Forks: 22
Language: Python
License: MIT
Category: llm-fine-tuning
Last pushed: Jul 22, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/liuqidong07/MOELoRA-peft"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
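
The same data can be fetched from Python. The response schema is not documented here, so the field contents are an assumption inferred from the metrics shown on this page; inspect the JSON before relying on any field name.

import requests

url = ("https://pt-edge.onrender.com/api/v1/quality/"
       "transformers/liuqidong07/MOELoRA-peft")
resp = requests.get(url, timeout=10)   # no API key needed at 100 requests/day
resp.raise_for_status()
data = resp.json()
print(data)   # presumably the overall score and per-axis breakdown shown above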