Victorletzelter/LoRA-MCL
Multiple Choice Learning of Low Rank Adapters for Language Modeling
This project helps developers fine-tune large language models to generate more diverse and relevant text, such as multiple plausible sentence completions or varied translations. It takes a pre-trained Hugging Face language model and your task-specific training data as input, and outputs a fine-tuned model capable of producing a range of high-quality textual outputs. This is for machine learning engineers and researchers working on applications like audio/image captioning or machine translation.
Use this if you need your language model to produce a variety of sensible and diverse outputs for a given input, rather than just a single, most probable continuation.
Not ideal if you only need the single most likely continuation for a given input and output diversity is not a requirement.
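Under the hood, multiple choice learning typically trains several hypotheses with a winner-takes-all loss, so each hypothesis specializes on a different plausible output; here the hypotheses are low-rank (LoRA-style) adapters on a frozen base model. The sketch below illustrates that loss on a toy frozen projection. The shapes, dimensions, and single-step loop are illustrative assumptions, not the repository's actual code.

import torch
import torch.nn as nn

# Toy sketch of the winner-takes-all (WTA) loss behind multiple choice
# learning, with K low-rank adapters as competing hypotheses.
# All shapes and the single-step loop are illustrative assumptions.

vocab, dim, rank, K = 100, 32, 4, 3

base = nn.Linear(dim, vocab, bias=False)   # stands in for frozen pre-trained weights
for p in base.parameters():
    p.requires_grad_(False)

# Each hypothesis k adds a rank-`rank` update B_k @ A_k to the base projection.
A = nn.ParameterList([nn.Parameter(0.01 * torch.randn(rank, dim)) for _ in range(K)])
B = nn.ParameterList([nn.Parameter(torch.zeros(vocab, rank)) for _ in range(K)])

def logits(h, k):
    return base(h) + (h @ A[k].t()) @ B[k].t()

opt = torch.optim.Adam(list(A.parameters()) + list(B.parameters()), lr=1e-3)
ce = nn.CrossEntropyLoss(reduction="none")

h = torch.randn(8, dim)               # stand-in hidden states
y = torch.randint(0, vocab, (8,))     # stand-in target tokens

# Per-sample loss under each hypothesis; only the best adapter per sample
# receives gradient (winner-takes-all).
losses = torch.stack([ce(logits(h, k), y) for k in range(K)], dim=1)  # (8, K)
opt.zero_grad()
losses.min(dim=1).values.mean().backward()
opt.step()

Because only the winning adapter per sample receives gradient, the K adapters are pushed to diversify rather than collapse onto a single answer, which is what yields multiple distinct completions at inference time.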
Stars: 11
Forks: —
Language: Python
License: Apache-2.0
Category: —
Last pushed: Feb 26, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/Victorletzelter/LoRA-MCL"
Open to everyone: 100 requests/day with no key needed, or get a free key for 1,000/day.
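If you prefer to stay in Python, the same keyless endpoint can be queried with the standard library. The response schema isn't documented on this card, so the sketch below simply pretty-prints whatever JSON comes back.

import json
import urllib.request

# Minimal sketch of the keyless API call (100 requests/day).
url = "https://pt-edge.onrender.com/api/v1/quality/llm-tools/Victorletzelter/LoRA-MCL"
with urllib.request.urlopen(url) as resp:
    data = json.load(resp)
print(json.dumps(data, indent=2))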
Higher-rated alternatives
axolotl-ai-cloud/axolotl
Go ahead and axolotl questions
google/paxml
Pax is a Jax-based machine learning framework for training large scale models. Pax allows for...
JosefAlbers/PVM
Phi-3.5 for Mac: Locally-run Vision and Language Models for Apple Silicon
iamarunbrahma/finetuned-qlora-falcon7b-medical
Finetuning of Falcon-7B LLM using QLoRA on Mental Health Conversational Dataset
h2oai/h2o-wizardlm
Open-Source Implementation of WizardLM to turn documents into Q:A pairs for LLM fine-tuning