Wang-ML-Lab/bayesian-peft

Bayesian Low-Rank Adaptation of LLMs: BLoB [NeurIPS 2024] and TFB [NeurIPS 2025]

Score: 45 / 100 (Emerging)

This project provides two Bayesian methods, BLoB and TFB, for improving the reliability and performance of fine-tuned Large Language Models (LLMs). Each takes an existing LLM adapter (such as LoRA) and treats its weights probabilistically, improving both how well the fine-tuned model generalizes to new, unseen data and how well calibrated its confidence is. It is aimed at AI researchers and practitioners who want more robust and trustworthy LLMs; a minimal sketch of the underlying idea follows the usage notes below.

Use this if you are fine-tuning Large Language Models and need to improve their accuracy, calibration, and ability to generalize to new, out-of-distribution data.

Not ideal if you are looking for a basic LLM fine-tuning library or if you don't have a background in machine learning research.
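The core idea of Bayesian low-rank adaptation can be sketched in a few lines of PyTorch. This is a minimal illustration only: the names BayesianLoRALinear and predict_bma are hypothetical, not this repository's API, and the KL regularizer that a real variational training objective (as in BLoB) would add is omitted for brevity.

# Illustrative sketch only (not this repo's code): a LoRA linear layer whose
# low-rank A matrix carries a mean-field Gaussian posterior; predictions are
# averaged over sampled adapters (Bayesian model averaging).
import torch
import torch.nn as nn
import torch.nn.functional as F

class BayesianLoRALinear(nn.Module):  # hypothetical name
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)  # keep pretrained weight frozen
        # Variational posterior q(A) = N(A_mu, softplus(A_rho)^2), elementwise
        self.A_mu = nn.Parameter(0.01 * torch.randn(r, base.in_features))
        self.A_rho = nn.Parameter(torch.full((r, base.in_features), -5.0))
        self.B = nn.Parameter(torch.zeros(base.out_features, r))  # starts at zero
        self.scaling = alpha / r

    def forward(self, x):
        std = F.softplus(self.A_rho)
        A = self.A_mu + std * torch.randn_like(std)  # reparameterization trick
        return self.base(x) + self.scaling * F.linear(F.linear(x, A), self.B)

@torch.no_grad()
def predict_bma(model, x, n_samples: int = 10):
    # Average class probabilities over several sampled adapters; the spread
    # across samples is what yields calibrated confidence estimates.
    probs = torch.stack([model(x).softmax(dim=-1) for _ in range(n_samples)])
    return probs.mean(dim=0)

Because B starts at zero, the adapted model initially matches the frozen base model, and uncertainty enters only through the sampled A matrix.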

Tags: LLM fine-tuning · Bayesian deep learning · model calibration · out-of-distribution generalization · AI research
No Package · No Dependents
Maintenance: 10 / 25
Adoption: 7 / 25
Maturity: 16 / 25
Community: 12 / 25

Stars: 35
Forks: 5
Language: Python
License: MIT
Last pushed: Feb 04, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/Wang-ML-Lab/bayesian-peft"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
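For scripted access, the same request can be made from Python. A minimal sketch, assuming only that the endpoint returns JSON; the response schema is not documented here.

# Minimal sketch: fetch the quality data shown on this page.
import requests

url = ("https://pt-edge.onrender.com/api/v1/quality/"
       "transformers/Wang-ML-Lab/bayesian-peft")
resp = requests.get(url, timeout=10)
resp.raise_for_status()  # fail loudly on rate limiting or other errors
print(resp.json())       # assumed JSON body; inspect fields before relying on them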