Lingkai-Kong/Calibrated-BERT-Fine-Tuning
Code for Paper: Calibrated Language Model Fine-Tuning for In- and Out-of-Distribution Data
This project helps machine learning engineers fine-tune language models like BERT more effectively. It takes an existing BERT-based model trained on a specific text dataset and applies calibration techniques. The output is a fine-tuned model that provides more reliable confidence scores when classifying new, potentially out-of-distribution text. It's aimed at ML practitioners building robust natural language processing applications.
No commits in the last 6 months.
Use this if you are developing AI models for text classification and need more trustworthy predictions, especially when your model encounters data slightly different from what it was originally trained on.
Not ideal if you are looking for a pre-trained, ready-to-use text classification model without needing to understand or implement fine-tuning techniques.
Stars: 36
Forks: 3
Language: Python
License: Apache-2.0
Category: nlp
Last pushed: Nov 16, 2020
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/nlp/Lingkai-Kong/Calibrated-BERT-Fine-Tuning"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000/day.
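The same endpoint can be called from Python instead of curl. A minimal sketch using only the standard library; the URL pattern is taken from the command above, but the JSON response schema is an assumption, so inspect the decoded dictionary before relying on specific fields:

```python
import json
import urllib.request

# Base of the quality API shown in the curl example above.
BASE_URL = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, repo: str) -> str:
    """Build the API URL for a repository's quality data."""
    return f"{BASE_URL}/{category}/{repo}"

def fetch_quality(category: str, repo: str) -> dict:
    """Fetch and decode the JSON quality report.

    Without an API key this is rate-limited to 100 requests/day.
    The shape of the returned dict is not documented here, so
    callers should inspect it rather than assume field names.
    """
    with urllib.request.urlopen(quality_url(category, repo)) as resp:
        return json.load(resp)

if __name__ == "__main__":
    data = fetch_quality("nlp", "Lingkai-Kong/Calibrated-BERT-Fine-Tuning")
    print(json.dumps(data, indent=2))
```

Building the URL separately from the request keeps the rate-limited network call easy to mock or skip in tests.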
Higher-rated alternatives
n-waves/multifit
The code to reproduce results from paper "MultiFiT: Efficient Multi-lingual Language Model...
princeton-nlp/SimCSE
[EMNLP 2021] SimCSE: Simple Contrastive Learning of Sentence Embeddings https://arxiv.org/abs/2104.08821
yxuansu/SimCTG
[NeurIPS'22 Spotlight] A Contrastive Framework for Neural Text Generation
alibaba-edu/simple-effective-text-matching
Source code of the ACL2019 paper "Simple and Effective Text Matching with Richer Alignment Features".
Shark-NLP/OpenICL
OpenICL is an open-source framework to facilitate research, development, and prototyping of...