Beomi/KcBERT-Finetune
KcBERT/KcELECTRA Fine Tune Benchmarks code (forked from https://github.com/monologg/KoELECTRA/tree/master/finetune)
This tool helps developers working with Korean natural language by providing a straightforward way to fine-tune pre-trained Korean language models such as KcBERT and KcELECTRA for specific tasks. You supply a pre-trained model and a dataset for a task (e.g., sentiment analysis, named entity recognition), and it outputs a model fine-tuned for that task. It is aimed at developers building Korean NLP applications.
No commits in the last 6 months.
Use this if you are a developer looking to quickly fine-tune Korean-specific BERT or ELECTRA models for common NLP tasks like sentiment analysis, natural language inference, or question answering.
Not ideal if you need to train a model from scratch, require multi-GPU support for training, or are working with languages other than Korean.
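The workflow described above starts from a task dataset in a simple columnar format. As a minimal sketch of the data-loading side, the snippet below parses sentiment examples in the tab-separated layout used by the public NSMC corpus (columns: id, document, label); this format is an assumption for illustration, and the actual fine-tuning scripts in the repository handle this parsing themselves.

```python
# Sketch: reading NSMC-style sentiment data (tab-separated columns:
# id, document, label) into (text, label) pairs, the shape a
# sequence-classification fine-tuning loop would consume.
# The column layout follows the public NSMC corpus; other tasks differ.
import csv
from io import StringIO

# Inline sample standing in for a real ratings_train.txt file.
SAMPLE = "id\tdocument\tlabel\n1\t정말 재밌어요\t1\n2\t별로였다\t0\n"

def load_nsmc(fp):
    """Yield (text, label) pairs from an NSMC-style TSV stream."""
    reader = csv.DictReader(fp, delimiter="\t")
    for row in reader:
        yield row["document"], int(row["label"])

examples = list(load_nsmc(StringIO(SAMPLE)))
print(examples)  # [('정말 재밌어요', 1), ('별로였다', 0)]
```

In practice you would point the loader at the task's train/dev files and feed the pairs to the model's tokenizer before training.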
Stars: 47
Forks: 10
Language: Python
License: —
Category: —
Last pushed: Apr 10, 2022
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/Beomi/KcBERT-Finetune"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
Tongjilibo/bert4torch
An elegant PyTorch implementation of transformers
nyu-mll/jiant
jiant is an NLP toolkit
lonePatient/TorchBlocks
A PyTorch-based toolkit for natural language processing
monologg/JointBERT
PyTorch implementation of JointBERT: "BERT for Joint Intent Classification and Slot Filling"
grammarly/gector
Official implementation of the papers "GECToR – Grammatical Error Correction: Tag, Not Rewrite"...