btrojan-official/HypeLoRA
HypeLoRA: Hypernetwork-Generated LoRA Adapters for Calibrated Language Model Fine-Tuning
This project helps machine learning practitioners fine-tune large language models like RoBERTa so their predictions are not only accurate but also well-calibrated, meaning the model's confidence in its answers truly reflects the likelihood of being correct. It takes a pre-trained language model and your task-specific data, and outputs a fine-tuned model that provides more reliable confidence scores for its text-based predictions. Data scientists, ML engineers, or researchers working with natural language processing can use this for tasks like sentiment analysis or question answering.
Use this if you need to fine-tune a language model for text classification or understanding and want its predictions to have trustworthy confidence levels, especially when overconfidence could lead to poor decisions.
Not ideal if your primary concern is raw task accuracy at all costs and you are not concerned with the model's confidence calibration, or if you are not working with Transformer-based language models.
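The core idea named in the title, a small hypernetwork that emits low-rank LoRA factors for a frozen layer, can be sketched as follows. This is a minimal illustration with NumPy, not HypeLoRA's actual API: the function names, the 2-layer MLP shape, and all dimensions are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (assumptions, not taken from the project).
d_in, d_out, rank, embed_dim, hidden = 16, 16, 4, 32, 64

# Random hypernetwork weights; in practice these would be learned.
W1 = rng.normal(size=(embed_dim, hidden)) * 0.1
W2 = rng.normal(size=(hidden, rank * (d_in + d_out))) * 0.1

def hypernetwork(layer_embedding):
    """Toy hypernetwork: a 2-layer MLP maps a per-layer embedding
    to the flattened LoRA factors A and B for that layer."""
    h = np.maximum(layer_embedding @ W1, 0.0)        # ReLU hidden layer
    flat = h @ W2                                    # (rank*(d_in+d_out),)
    A = flat[: rank * d_in].reshape(rank, d_in)      # down-projection
    B = flat[rank * d_in:].reshape(d_out, rank)      # up-projection
    return A, B

def lora_forward(x, W_frozen, A, B, scale=1.0):
    # Standard LoRA update: y = x W^T + scale * (x A^T) B^T,
    # i.e. the effective weight delta is B @ A (rank-limited).
    return x @ W_frozen.T + scale * (x @ A.T) @ B.T

# Generate an adapter for one frozen layer and run a forward pass.
emb = rng.normal(size=embed_dim)   # learned per-layer embedding (here: random)
A, B = hypernetwork(emb)
x = rng.normal(size=(2, d_in))     # a batch of 2 inputs
W = rng.normal(size=(d_out, d_in)) # the frozen pre-trained weight
y = lora_forward(x, W, A, B)
print(y.shape)  # (2, 16)
```

Only the hypernetwork's parameters would be trained; the base weights stay frozen, which is what keeps the fine-tuning parameter-efficient. How HypeLoRA conditions the hypernetwork and achieves calibration is specific to the project and not reproduced here.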
Stars: 12
Forks: —
Language: Python
License: —
Category: —
Last pushed: Feb 27, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/btrojan-official/HypeLoRA"
Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000/day.
Higher-rated alternatives
philipperemy/keras-attention
Keras Attention Layer (Luong and Bahdanau scores).
tatp22/linformer-pytorch
My take on a practical implementation of Linformer for Pytorch.
datalogue/keras-attention
Visualizing RNNs using the attention mechanism
ematvey/hierarchical-attention-networks
Document classification with Hierarchical Attention Networks in TensorFlow. WARNING: project is...
thushv89/attention_keras
Keras Layer implementation of Attention for Sequential models