EricLBuehler/xlora

X-LoRA: Mixture of LoRA Experts

Quality score: 39 / 100 (Emerging)

This project helps machine learning engineers and researchers combine multiple task-specific LoRA adapters into a single, more versatile model. You feed it a base language model and several LoRA adapters, and it outputs a new model that learns to mix those adapters dynamically, drawing on each one's expertise for better performance on complex tasks. It is aimed at practitioners who work with large language models and want to extend their capabilities without extensive retraining.

267 stars. No commits in the last 6 months.

Use this if you have multiple LoRA adapters, each strong at a specific task, and you want to combine them into a single model that can handle more nuanced or multi-faceted prompts (a usage sketch follows below).

Not ideal if you are looking for a way to fine-tune a base model in the first place, or if you don't already have pre-trained LoRA adapters to combine.
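
The project ships as a Python package that wraps a Hugging Face transformers model. Below is a minimal sketch of the conversion flow based on the usage shown in the repository README: load a base model, then convert it with xlora.add_xlora_to_model and an xLoRAConfig listing the adapter checkpoints to mix. The model id and adapter paths are placeholders, and parameter names such as xlora_depth should be verified against the current xlora API.

import torch
import xlora
from transformers import AutoConfig, AutoModelForCausalLM

base_model_id = "mistralai/Mistral-7B-Instruct-v0.1"  # placeholder base model

# Load the frozen base model that the LoRA adapters were trained against.
model = AutoModelForCausalLM.from_pretrained(
    base_model_id,
    torch_dtype=torch.bfloat16,
    device_map="cuda:0",
)
config = AutoConfig.from_pretrained(base_model_id)

# Convert to an X-LoRA model: the listed adapters are loaded and a small
# gating head learns how strongly to apply each one during generation.
xlora_model = xlora.add_xlora_to_model(
    model=model,
    xlora_config=xlora.xLoRAConfig(
        config.hidden_size,
        base_model_id=base_model_id,
        xlora_depth=8,  # depth of the gating network (assumed example value)
        device=torch.device("cuda"),
        adapters={
            "adapter_1": "./path/to/adapter_1/",  # placeholder checkpoint paths
            "adapter_2": "./path/to/adapter_2/",
        },
    ),
    verbose=True,
)

# xlora_model can then be trained or used for inference like a regular
# transformers causal language model.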

large-language-models model-fine-tuning mixture-of-experts natural-language-processing model-optimization
Status: Stale for 6 months. No package published. No dependents.
Maintenance: 0 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 13 / 25


Stars: 267
Forks: 21
Language: Python
License: Apache-2.0
Category: llm-fine-tuning
Last pushed: Aug 04, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/EricLBuehler/xlora"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.