SKTBrain/KoBERT
Korean BERT pre-trained cased (KoBERT)
KoBERT is a BERT language model pre-trained on Korean text, for anyone who needs to understand or categorize Korean-language content. It takes raw Korean text as input and supports tasks such as sentiment analysis, named entity recognition (e.g., organizations or product names), and sentence-meaning comparison. It is best suited to natural language processing specialists, data scientists, and researchers focused on Korean text analysis.
1,407 stars. No commits in the last 6 months.
Use this if you need a highly accurate language model specifically pre-trained on a large volume of Korean text for tasks like sentiment analysis, named entity recognition, or semantic search.
Not ideal if your primary language of focus is not Korean, or if you require a very lightweight solution for simple keyword matching.
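For a concrete sense of the workflow, here is a minimal sentence-similarity sketch. It assumes the skt/kobert-base-v1 Hugging Face checkpoint and the KoBERTTokenizer helper that the repo ships in its kobert_hf subdirectory; treat it as an illustration under those assumptions, not the repo's official recipe.

import torch
from transformers import BertModel
from kobert_tokenizer import KoBERTTokenizer  # from the repo's kobert_hf subdirectory

# Load the pre-trained checkpoint (assumed name: skt/kobert-base-v1).
tokenizer = KoBERTTokenizer.from_pretrained("skt/kobert-base-v1")
model = BertModel.from_pretrained("skt/kobert-base-v1")
model.eval()

sentences = ["한국어 모델을 공유합니다.", "한국어 모델을 배포합니다."]
inputs = tokenizer(sentences, return_tensors="pt", padding=True)

with torch.no_grad():
    outputs = model(**inputs)

# Mean-pool token embeddings (ignoring padding) to get one vector per sentence.
mask = inputs["attention_mask"].unsqueeze(-1)
emb = (outputs.last_hidden_state * mask).sum(1) / mask.sum(1)

# Cosine similarity between the two sentence vectors.
sim = torch.nn.functional.cosine_similarity(emb[0], emb[1], dim=0)
print(f"similarity: {sim.item():.3f}")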
Stars: 1,407
Forks: 380
Language: Python
License: Apache-2.0
Category:
Last pushed: Jun 14, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/SKTBrain/KoBERT"
Open to everyone at 100 requests/day with no key needed; a free key raises the limit to 1,000 requests/day.
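To query the endpoint from code rather than curl, here is a minimal sketch using Python's requests library. The X-API-Key header is a hypothetical placeholder: this listing does not document how the free key is passed, so check the API's documentation for the actual scheme.

import requests

URL = "https://pt-edge.onrender.com/api/v1/quality/transformers/SKTBrain/KoBERT"

# Anonymous access: up to 100 requests/day, no key required.
resp = requests.get(URL, timeout=10)
resp.raise_for_status()
print(resp.json())

# With a free key the limit rises to 1,000 requests/day. The header name
# below is a hypothetical placeholder, not documented in this listing.
resp = requests.get(URL, headers={"X-API-Key": "YOUR_KEY"}, timeout=10)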
Related models
monologg/KoELECTRA
Pretrained ELECTRA Model for Korean
monologg/KoBERT-Transformers
KoBERT on 🤗 Huggingface Transformers 🤗 (with bug fixes)
VinAIResearch/PhoBERT
PhoBERT: Pre-trained language models for Vietnamese (EMNLP-2020 Findings)
KB-AI-Research/KB-ALBERT
A Korean ALBERT model specialized for the economics and finance domain, provided by KB Kookmin Bank
ymcui/MacBERT
Revisiting Pre-trained Models for Chinese Natural Language Processing (MacBERT)