Lingkai-Kong/Calibrated-BERT-Fine-Tuning

Code for Paper: Calibrated Language Model Fine-Tuning for In- and Out-of-Distribution Data

31 / 100 (Emerging)

This project helps machine learning engineers fine-tune language models like BERT more effectively. It takes an existing BERT-based model trained on a specific text dataset and applies calibration techniques during fine-tuning. The output is a fine-tuned model whose confidence scores are more reliable when classifying new text data, including out-of-distribution data that differs from the training set. It's for ML practitioners building robust natural language processing applications.
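
To illustrate what calibrated confidence scores mean in general, the sketch below shows post-hoc temperature scaling of a classifier's logits. This is not the technique implemented in this repository (the paper proposes its own approach to calibrated fine-tuning); it is only a generic, hypothetical example of calibration.

import torch
import torch.nn.functional as F

def temperature_scale(logits: torch.Tensor, temperature: float) -> torch.Tensor:
    # Dividing logits by a temperature > 1 softens overconfident
    # probabilities; the predicted class itself is unchanged.
    return F.softmax(logits / temperature, dim=-1)

# Hypothetical usage with logits from any fine-tuned text classifier:
logits = torch.tensor([[4.2, 0.3, -1.1]])   # raw scores for 3 classes
print(temperature_scale(logits, temperature=2.0))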

No commits in the last 6 months.

Use this if you are developing AI models for text classification and need more trustworthy predictions, especially when your model encounters data slightly different from what it was originally trained on.

Not ideal if you are looking for a pre-trained, ready-to-use text classification model without needing to understand or implement fine-tuning techniques.

natural-language-processing text-classification machine-learning-engineering model-calibration AI-model-development
Stale 6m · No Package · No Dependents
Maintenance 0 / 25
Adoption 7 / 25
Maturity 16 / 25
Community 8 / 25


Stars: 36
Forks: 3
Language: Python
License: Apache-2.0
Last pushed: Nov 16, 2020
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/nlp/Lingkai-Kong/Calibrated-BERT-Fine-Tuning"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
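
For scripted access, the same endpoint can also be queried from Python; here is a minimal sketch using the third-party requests library (the JSON field names are not documented on this page, so the example simply prints the parsed response):

import requests

# Hypothetical script fetching the quality report shown above.
url = "https://pt-edge.onrender.com/api/v1/quality/nlp/Lingkai-Kong/Calibrated-BERT-Fine-Tuning"
response = requests.get(url, timeout=10)
response.raise_for_status()   # fail loudly on HTTP errors
print(response.json())        # the parsed quality report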