btrojan-official/HypeLoRA

HypeLoRA: Hypernetwork-Generated LoRA Adapters for Calibrated Language Model Fine-Tuning

Score: 22 / 100 (Experimental)

This project helps machine learning practitioners fine-tune large language models like RoBERTa so their predictions are not only accurate but also well-calibrated, meaning the model's confidence in its answers truly reflects the likelihood of being correct. It takes a pre-trained language model and your task-specific data, and outputs a fine-tuned model that provides more reliable confidence scores for its text-based predictions. Data scientists, ML engineers, or researchers working with natural language processing can use this for tasks like sentiment analysis or question answering.
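Calibration of the kind described above is commonly measured with Expected Calibration Error (ECE), which bins predictions by confidence and compares each bin's average confidence to its accuracy. A minimal sketch of that metric (not code from this repo; the function name and equal-width binning are illustrative choices):

```python
def expected_calibration_error(confidences, correct, n_bins=10):
    """Bin predictions by confidence and average |accuracy - confidence|,
    weighted by the fraction of samples in each bin."""
    n = len(confidences)
    ece = 0.0
    for b in range(n_bins):
        lo, hi = b / n_bins, (b + 1) / n_bins
        # Half-open bins (lo, hi]; confidence 0.0 falls into the first bin.
        idx = [i for i, c in enumerate(confidences)
               if lo < c <= hi or (b == 0 and c == lo)]
        if not idx:
            continue
        acc = sum(correct[i] for i in idx) / len(idx)   # bin accuracy
        conf = sum(confidences[i] for i in idx) / len(idx)  # bin confidence
        ece += len(idx) / n * abs(acc - conf)
    return ece
```

A well-calibrated model yields an ECE near zero: predictions made with 90% confidence should be right about 90% of the time.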

Use this if you need to fine-tune a language model for text classification or understanding and want its predictions to have trustworthy confidence levels, especially when overconfidence could lead to poor decisions.

Not ideal if you care only about raw task accuracy and not about confidence calibration, or if you are not working with Transformer-based language models.

natural-language-processing machine-learning-engineering text-classification model-calibration language-model-fine-tuning
No License · No Package · No Dependents
Maintenance 10 / 25
Adoption 5 / 25
Maturity 7 / 25
Community 0 / 25

Stars: 12
Forks:
Language: Python
License: None
Last pushed: Feb 27, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/btrojan-official/HypeLoRA"
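The same endpoint can be queried from Python. A minimal sketch using only the standard library; the endpoint path is taken from the curl command above, but the shape of the JSON response is an assumption, so inspect it before relying on specific fields:

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner_repo: str) -> str:
    """Build the quality-API URL for a repo in a given category."""
    return f"{BASE}/{category}/{owner_repo}"

def fetch_quality(category: str, owner_repo: str) -> dict:
    """Fetch and decode the quality record (performs a network call)."""
    with urllib.request.urlopen(quality_url(category, owner_repo)) as resp:
        return json.load(resp)

# Example (network call, subject to the 100 requests/day limit):
# data = fetch_quality("ml-frameworks", "btrojan-official/HypeLoRA")
```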

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.