liuqidong07/MOELoRA-peft
[SIGIR'24] The official implementation code of MOELoRA.
This project helps medical researchers and practitioners fine-tune a large language model (LLM) for multiple medical tasks simultaneously. It takes existing medical datasets and a ChatGLM-6B base model as input, and outputs a single specialized model that handles several medical information-processing tasks by combining LoRA adapters in a mixture-of-experts (MOE) layout. It is aimed at medical research or clinical-support settings where an LLM must be adapted to specific healthcare applications.
189 stars. No commits in the last 6 months.
Use this if you need to efficiently adapt a large language model like ChatGLM-6B to handle multiple medical natural language processing tasks, such as patient record analysis or medical question answering.
Not ideal if you are not working with medical text data or cannot meet the hardware requirements for training large models.
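The core idea behind MOELoRA is that each task shares one frozen base model while several small low-rank (LoRA) "expert" adapters are mixed by a task-conditioned gate. The sketch below is a minimal NumPy illustration of that layout, not the repository's actual implementation; the dimensions, gate design, and variable names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r, n_experts, n_tasks = 16, 4, 4, 3

# Frozen base weight and per-expert low-rank factors B_i A_i of rank r.
# These names and sizes are illustrative, not the repo's.
W = rng.normal(size=(d, d))
A = rng.normal(size=(n_experts, r, d)) * 0.01
B = np.zeros((n_experts, d, r))  # B starts at zero, as in standard LoRA

# Task-conditioned gate: one logit vector over experts per task id.
gate = rng.normal(size=(n_tasks, n_experts))

def moelora_forward(x, task_id):
    """y = W x + sum_i g_i(task) * B_i A_i x, with softmax-gated experts."""
    g = np.exp(gate[task_id])
    g /= g.sum()
    delta = sum(g[i] * B[i] @ (A[i] @ x) for i in range(n_experts))
    return W @ x + delta

x = rng.normal(size=d)
y = moelora_forward(x, task_id=1)
# With B initialized to zero, every expert's update is zero,
# so the output equals the frozen base projection W @ x.
assert np.allclose(y, W @ x)
```

Only the A and B factors and the gate would be trained, which is what makes the approach parameter-efficient across many medical tasks at once.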
Stars: 189
Forks: 22
Language: Python
License: MIT
Category:
Last pushed: Jul 22, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/liuqidong07/MOELoRA-peft"
Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000/day.
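The same endpoint can be called from Python. The helper below is a hypothetical convenience wrapper: the `transformers` path segment is taken verbatim from the example URL, and the JSON response schema is not documented here, so the sketch only builds the URL and notes how to fetch it.

```python
# Base path copied from the documented example URL above;
# the "transformers" segment is assumed to be fixed for this API.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality/transformers"

def quality_url(owner: str, repo: str) -> str:
    """Build the per-repository quality endpoint URL (hypothetical helper)."""
    return f"{API_BASE}/{owner}/{repo}"

url = quality_url("liuqidong07", "MOELoRA-peft")
print(url)
# To fetch (counts against the 100 requests/day anonymous limit):
#   from urllib.request import urlopen
#   data = urlopen(url).read()
```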
Higher-rated alternatives
OptimalScale/LMFlow
An Extensible Toolkit for Finetuning and Inference of Large Foundation Models. Large Models for All.
adithya-s-k/AI-Engineering.academy
Mastering Applied AI, One Concept at a Time
jax-ml/jax-llm-examples
Minimal yet performant LLM examples in pure JAX
young-geng/scalax
A simple library for scaling up JAX programs
riyanshibohra/TuneKit
Upload your data → Get a fine-tuned SLM. Free.