Beomi/KcBERT-Finetune

KcBERT/KcELECTRA Fine Tune Benchmarks code (forked from https://github.com/monologg/KoELECTRA/tree/master/finetune)

Score: 33 / 100 (Emerging)

This tool helps developers working with Korean natural language by providing a straightforward way to fine-tune pre-trained Korean language models such as KcBERT and KcELECTRA for specific tasks. You supply a pre-trained model and a task-specific dataset (e.g., sentiment analysis, named entity recognition), and it outputs a model fine-tuned for that task. It is aimed at developers building Korean NLP applications.

No commits in the last 6 months.

Use this if you are a developer looking to quickly fine-tune Korean-specific BERT or ELECTRA models for common NLP tasks like sentiment analysis, natural language inference, or question answering.
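The upstream KoELECTRA finetune code this repo forks drives each run from a per-model config file. A hypothetical config for a sentiment-analysis run on an NSMC-style dataset might look like the following; every field name and value here is an illustrative assumption, not copied from this repo's actual config files:

```json
{
  "task": "nsmc",
  "model_name_or_path": "beomi/kcbert-base",
  "data_dir": "data",
  "train_file": "ratings_train.txt",
  "test_file": "ratings_test.txt",
  "max_seq_len": 128,
  "train_batch_size": 32,
  "learning_rate": 5e-5,
  "num_train_epochs": 3,
  "seed": 42
}
```

Check the repo's own config directory for the real schema before running anything.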

Not ideal if you need to train a model from scratch, require multi-GPU support for training, or are working with languages other than Korean.

Tags: Korean NLP, natural language processing, text classification, named entity recognition, question answering
Flags: No License, Stale (6 months), No Package, No Dependents
Maintenance 0 / 25
Adoption 8 / 25
Maturity 8 / 25
Community 17 / 25
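The four category subscores above appear to add up to the overall score of 33. A quick sanity check, assuming the aggregation is a plain sum of the four 0-25 categories (the page does not document the actual formula):

```python
# Category subscores as displayed on this page.
# The sum-to-100 aggregation is an assumption inferred from the
# displayed numbers, not a documented scoring rule.
subscores = {
    "maintenance": 0,
    "adoption": 8,
    "maturity": 8,
    "community": 17,
}

total = sum(subscores.values())
print(total)  # 33, matching the overall score shown above
```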


Stars: 47
Forks: 10
Language: Python
License: none
Last pushed: Apr 10, 2022
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/Beomi/KcBERT-Finetune"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.