monologg/KoELECTRA
Pretrained ELECTRA Model for Korean
KoELECTRA provides pre-trained language models specifically designed for understanding Korean text. It takes raw Korean text as input and helps identify meaning, sentiment, or relationships between sentences. This project is ideal for data scientists or researchers who need to analyze and process large volumes of Korean language data efficiently.
630 stars. No commits in the last 6 months.
Use this if you need to build applications that understand and process Korean text, such as for sentiment analysis, named entity recognition, or question answering.
Not ideal if your primary language data is not Korean or if you are looking for a simple, out-of-the-box solution without any programming.
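For the use cases above, the model is typically loaded through Hugging Face Transformers. A minimal sketch, assuming the project's published checkpoint name `monologg/koelectra-base-v3-discriminator` (the exact model ID may differ across KoELECTRA versions):

```python
# Minimal sketch: encoding Korean text with a KoELECTRA checkpoint.
# The checkpoint name below is an assumption; check the repo's README
# for the model IDs actually published.
from transformers import AutoTokenizer, AutoModel

MODEL_ID = "monologg/koelectra-base-v3-discriminator"  # assumed checkpoint

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModel.from_pretrained(MODEL_ID)

# Tokenize a Korean sentence and get contextual embeddings.
inputs = tokenizer("한국어 텍스트를 분석합니다.", return_tensors="pt")
outputs = model(**inputs)

# One embedding vector per input token.
print(outputs.last_hidden_state.shape)
```

The same tokenizer/model pair can then feed a task-specific head (e.g. `AutoModelForSequenceClassification` for sentiment analysis or `AutoModelForTokenClassification` for named entity recognition).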
Stars
630
Forks
136
Language
Python
License
Apache-2.0
Category
Last pushed
Feb 19, 2024
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/monologg/KoELECTRA"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Related models
SKTBrain/KoBERT
Korean BERT pre-trained cased (KoBERT)
monologg/KoBERT-Transformers
KoBERT on 🤗 Huggingface Transformers 🤗 (with bug fixes)
VinAIResearch/PhoBERT
PhoBERT: Pre-trained language models for Vietnamese (EMNLP-2020 Findings)
KB-AI-Research/KB-ALBERT
A Korean ALBERT model specialized for the economics/finance domain, provided by KB Kookmin Bank (KB국민은행)
ymcui/MacBERT
Revisiting Pre-trained Models for Chinese Natural Language Processing (MacBERT)