uds-lsv/bert-stable-fine-tuning

On the Stability of Fine-tuning BERT: Misconceptions, Explanations, and Strong Baselines

Quality score: 43 / 100 (Emerging)

This project helps machine learning engineers and NLP researchers make fine-tuning of pretrained language models such as BERT more reliable. Applied to an existing fine-tuning setup for BERT, RoBERTa, or ALBERT, it yields more consistent task performance across training runs, reducing the variance caused by random seeds (classifier-head initialization and data ordering).
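
The paper behind this repo traces the instability mainly to optimization difficulties on small datasets and recommends a simple recipe: a small learning rate with Adam bias correction, and training for more iterations (to near-zero training loss). A minimal sketch of measuring seed-to-seed variance under that recipe follows; the training loop and dev metric are placeholders rather than the repo's actual scripts, and bert-base-uncased plus five seeds are illustrative choices.

import statistics

from torch.optim import AdamW  # torch's AdamW always applies bias correction
from transformers import AutoModelForSequenceClassification, set_seed

def fine_tune_once(seed: int) -> float:
    # The seed controls classifier-head init and data ordering, the main
    # sources of run-to-run variance.
    set_seed(seed)
    model = AutoModelForSequenceClassification.from_pretrained(
        "bert-base-uncased", num_labels=2
    )
    optimizer = AdamW(model.parameters(), lr=2e-5)  # small LR, per the paper
    # ... training loop (more epochs than the usual 3) goes here ...
    dev_metric = 0.0  # placeholder: substitute your dev-set evaluation
    return dev_metric

scores = [fine_tune_once(seed) for seed in range(5)]
print(f"dev metric: {statistics.mean(scores):.3f} "
      f"+/- {statistics.stdev(scores):.3f}")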

138 stars. No commits in the last 6 months.

Use this if you are a machine learning engineer or researcher experiencing inconsistent performance when fine-tuning BERT-based models for natural language processing tasks.

Not ideal if you are not working with transformer-based language models or if your primary concern is model performance rather than stability across different training runs.

natural-language-processing machine-learning-engineering deep-learning-research model-fine-tuning language-model-development
Stale (6 months) · No Package · No Dependents
Maintenance 0 / 25
Adoption 10 / 25
Maturity 16 / 25
Community 17 / 25

Stars: 138
Forks: 21
Language: Python
License: Apache-2.0
Last pushed: Sep 06, 2023
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/nlp/uds-lsv/bert-stable-fine-tuning"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
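
If you'd rather consume the endpoint from Python, here is a minimal stdlib sketch; the response schema isn't documented here, so it just pretty-prints whatever JSON comes back:

import json
import urllib.request

URL = ("https://pt-edge.onrender.com/api/v1/quality/nlp/"
       "uds-lsv/bert-stable-fine-tuning")

with urllib.request.urlopen(URL) as resp:
    data = json.load(resp)  # assumes a JSON body; shape not assumed

print(json.dumps(data, indent=2))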