EricLBuehler/xlora
X-LoRA: Mixture of LoRA Experts
This project helps machine learning engineers and researchers combine multiple specialized LoRA adapters, each fine-tuned for a different task, into a single, more versatile model. You feed it a base language model and several LoRA adapters, and it produces a new model that dynamically blends the adapters' expertise for better performance on complex tasks. It is aimed at practitioners who work with large language models and want to extend their capabilities without extensive retraining.
267 stars. No commits in the last 6 months.
Use this if you have multiple LoRA adapters, each good at a specific task, and you want to combine them to handle more nuanced or multi-faceted prompts with a single model.
Not a fit if you are looking for a way to train or fine-tune a base model itself, or if you don't already have pre-trained LoRA adapters to combine.
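For orientation, here is a minimal sketch of the intended workflow, adapted from the project's README: wrap a Hugging Face base model and a set of adapter checkpoints into one X-LoRA model. The base model ID, adapter names, and checkpoint paths below are placeholder examples, and the exact xlora call signatures may differ between versions.

import torch
import xlora
from transformers import AutoConfig, AutoModelForCausalLM

base_id = "mistralai/Mistral-7B-Instruct-v0.1"  # placeholder base model

model = AutoModelForCausalLM.from_pretrained(
    base_id,
    trust_remote_code=True,
    device_map="cuda:0",
    torch_dtype=torch.bfloat16,
)
config = AutoConfig.from_pretrained(base_id, trust_remote_code=True)

# Convert the base model into an X-LoRA model whose classifier learns
# to gate between the listed adapters at inference time.
xlora_model = xlora.add_xlora_to_model(
    model=model,
    xlora_config=xlora.xLoRAConfig(
        config.hidden_size,
        base_model_id=base_id,
        xlora_depth=8,  # depth of the X-LoRA classifier network
        device=torch.device("cuda"),
        adapters={
            # Hypothetical adapter names and checkpoint paths.
            "adapter_math": "./adapters/math/",
            "adapter_code": "./adapters/code/",
        },
    ),
    verbose=True,
)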
Stars: 267
Forks: 21
Language: Python
License: Apache-2.0
Last pushed: Aug 04, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/EricLBuehler/xlora"
Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000/day.
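If you prefer Python over curl, a minimal sketch of the same call follows. The response schema is not documented here, so the example simply prints whatever JSON the API returns.

import requests

url = "https://pt-edge.onrender.com/api/v1/quality/transformers/EricLBuehler/xlora"

# Fetch the quality data for this repository; no API key is needed
# at the free tier (100 requests/day).
resp = requests.get(url, timeout=10)
resp.raise_for_status()
print(resp.json())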
Higher-rated alternatives
OptimalScale/LMFlow
An Extensible Toolkit for Finetuning and Inference of Large Foundation Models. Large Models for All.
adithya-s-k/AI-Engineering.academy
Mastering Applied AI, One Concept at a Time
jax-ml/jax-llm-examples
Minimal yet performant LLM examples in pure JAX
young-geng/scalax
A simple library for scaling up JAX programs
riyanshibohra/TuneKit
Upload your data → Get a fine-tuned SLM. Free.