monologg/DistilKoBERT
Distillation of KoBERT from SKTBrain (Lightweight KoBERT)
This project offers a distilled, more efficient version of KoBERT, a language model for Korean text. It takes raw Korean text as input and can be applied to tasks like sentiment analysis, named entity recognition, or question answering, with significantly faster inference and lower resource usage than the full model. Anyone working with Korean language data and building applications like chatbots, content analysis tools, or information retrieval systems may find it useful.
198 stars. No commits in the last 6 months.
Use this if you need to process Korean text quickly and efficiently, especially when deploying models on devices with limited computational power or for real-time applications.
Not ideal if you require the absolute highest accuracy for highly complex natural language understanding tasks and have ample computational resources for the full KoBERT model.
Stars: 198
Forks: 23
Language: Python
License: Apache-2.0
Category:
Last pushed: Sep 06, 2023
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/monologg/DistilKoBERT"
Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000/day.
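For scripting against this endpoint, a minimal Python sketch is shown below. The URL scheme matches the curl command above; the JSON field names (`stars`, `forks`, `license`, `last_pushed`) are assumptions mirroring the stats shown on this page, not a documented response schema.

```python
import json
from urllib.parse import quote

# Base endpoint taken from the curl example above.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality/transformers"

def quality_url(owner: str, repo: str) -> str:
    """Build the quality-endpoint URL for a GitHub owner/repo pair."""
    return f"{API_BASE}/{quote(owner)}/{quote(repo)}"

url = quality_url("monologg", "DistilKoBERT")
print(url)
# → https://pt-edge.onrender.com/api/v1/quality/transformers/monologg/DistilKoBERT

# Hypothetical response payload, modeled on the stats panel above;
# the real API may use different field names.
sample = json.loads(
    '{"stars": 198, "forks": 23, "license": "Apache-2.0",'
    ' "last_pushed": "2023-09-06"}'
)
print(sample["stars"], sample["forks"])
# → 198 23
```

An actual request would replace the sample payload with something like `requests.get(url, timeout=10).json()`, keeping in mind the 100 requests/day limit for keyless access.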
Higher-rated alternatives
SKTBrain/KoBERT
Korean BERT pre-trained cased (KoBERT)
monologg/KoELECTRA
Pretrained ELECTRA Model for Korean
monologg/KoBERT-Transformers
KoBERT on 🤗 Huggingface Transformers 🤗 (with Bug Fixed)
VinAIResearch/PhoBERT
PhoBERT: Pre-trained language models for Vietnamese (EMNLP-2020 Findings)
KB-AI-Research/KB-ALBERT
A Korean ALBERT model specialized for the economics/finance domain, provided by KB Kookmin Bank (KB국민은행)