bayartsogt-ya/albert-mongolian
ALBERT trained on Mongolian text corpus
This project provides an ALBERT language model pretrained on a Mongolian text corpus. Given Mongolian text as input, it can classify the text into topics such as sports, economics, or health, or fill in masked words. It is useful for data scientists and NLP engineers working with Mongolian-language data.
No commits in the last 6 months.
Use this if you need an efficient, pretrained model for text classification or masked-word prediction on Mongolian text.
Not ideal if your primary need is for a general-purpose text model for languages other than Mongolian, or if you require an exceptionally large model for highly complex, novel NLP tasks.
Stars
18
Forks
3
Language
Jupyter Notebook
License
Apache-2.0
Category
Last pushed
Jan 10, 2021
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/bayartsogt-ya/albert-mongolian"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
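The same endpoint can be queried from Python. A minimal sketch using only the standard library; the shape of the JSON response is not documented here, so parsing is left to the caller:

```python
import json
import urllib.request

BASE_URL = "https://pt-edge.onrender.com/api/v1/quality/transformers"

def quality_url(repo: str) -> str:
    """Build the quality-data endpoint URL for a GitHub repo slug."""
    return f"{BASE_URL}/{repo}"

url = quality_url("bayartsogt-ya/albert-mongolian")

# Uncomment to perform the actual request (anonymous access is
# limited to 100 requests/day):
# with urllib.request.urlopen(url) as resp:
#     data = json.load(resp)
```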
Higher-rated alternatives
SKTBrain/KoBERT
Korean BERT pre-trained cased (KoBERT)
monologg/KoELECTRA
Pretrained ELECTRA Model for Korean
monologg/KoBERT-Transformers
KoBERT on 🤗 Huggingface Transformers 🤗 (with Bug Fixed)
VinAIResearch/PhoBERT
PhoBERT: Pre-trained language models for Vietnamese (EMNLP-2020 Findings)
KB-AI-Research/KB-ALBERT
A Korean ALBERT model specialized for the economics/finance domain, provided by KB Kookmin Bank