Victorletzelter/LoRA-MCL

Multiple Choice Learning of Low Rank Adapters for Language Modeling

Score: 28 / 100 (Experimental)

This project helps developers fine-tune large language models to generate diverse yet relevant text, such as multiple plausible sentence completions or varied translations. It takes a pre-trained Hugging Face language model and your task-specific training data as input, and outputs a fine-tuned model capable of producing a range of high-quality textual outputs. It is aimed at machine learning engineers and researchers working on applications such as audio/image captioning or machine translation.

Use this if you need your language model to produce a variety of sensible, diverse outputs for a given input rather than a single most-probable continuation; a minimal training sketch follows below.

Not ideal if you primarily need a model that generates only the single most likely or 'correct' continuation, and diversity is not a requirement.
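
To make the idea concrete, here is a minimal sketch of the winner-takes-all training step behind Multiple Choice Learning, written with Hugging Face transformers and peft. The gpt2 backbone, the K = 3 adapter count, the "hyp_k" adapter names, the rank-8 LoRA config, and the toy batch are illustrative assumptions, not the repository's actual training code:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

# Shared backbone; any causal LM works (gpt2 is an assumption here).
model_name = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
base = AutoModelForCausalLM.from_pretrained(model_name)

# Attach K independent LoRA adapters, one per hypothesis.
K = 3
model = get_peft_model(
    base, LoraConfig(task_type="CAUSAL_LM", r=8, lora_alpha=16), adapter_name="hyp_0"
)
for k in range(1, K):
    model.add_adapter(f"hyp_{k}", LoraConfig(task_type="CAUSAL_LM", r=8, lora_alpha=16))

# Optimize every LoRA parameter; the backbone stays frozen.
optimizer = torch.optim.AdamW(
    [p for n, p in model.named_parameters() if "lora_" in n], lr=1e-4
)

batch = tokenizer("The cat sat on the", return_tensors="pt")

# Winner-takes-all step: score the batch under every adapter,
# then backpropagate only through the best-scoring one.
losses = []
for k in range(K):
    model.set_adapter(f"hyp_{k}")
    losses.append(model(**batch, labels=batch["input_ids"]).loss)

winner = int(torch.stack(losses).argmin())
optimizer.zero_grad()
losses[winner].backward()
optimizer.step()

Repeated over a dataset, this kind of step pushes each adapter to specialize on a different mode of the output distribution; at inference time, generating once per adapter yields a set of diverse candidate outputs.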

Tags: Language Model Fine-tuning · Diverse Text Generation · Audio Captioning · Image Captioning · Machine Translation
No package published · No dependents
Maintenance: 10 / 25
Adoption: 5 / 25
Maturity: 13 / 25
Community: 0 / 25

Stars: 11
Forks:
Language: Python
License: Apache-2.0
Category: llm-fine-tuning
Last pushed: Feb 26, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/Victorletzelter/LoRA-MCL"

Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000 requests/day.
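
From Python, the same record can be fetched with a short script; this sketch assumes the endpoint returns JSON (the response schema is not documented here):

import requests

url = "https://pt-edge.onrender.com/api/v1/quality/llm-tools/Victorletzelter/LoRA-MCL"
resp = requests.get(url, timeout=10)  # no API key needed at the free tier
resp.raise_for_status()
print(resp.json())  # assumption: a JSON body containing the score fields shown above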