ymcui/MacBERT
Revisiting Pre-trained Models for Chinese Natural Language Processing (MacBERT)
This project provides pre-trained language models designed to better understand and process Chinese text. Given raw Chinese text, the models improve downstream natural language processing tasks such as sentiment analysis and question answering. It is aimed at anyone building or deploying AI systems that need accurate analysis of Chinese-language data.
702 stars. No commits in the last 6 months.
Use this if you need to build or enhance applications that interpret and process Chinese text with improved accuracy.
Not ideal if your primary focus is on languages other than Chinese, as this model is specifically optimized for Chinese natural language processing.
Stars: 702
Forks: 60
Language: —
License: Apache-2.0
Category:
Last pushed: Jul 15, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/ymcui/MacBERT"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
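The curl command above can also be reproduced with Python's standard library. The sketch below builds the endpoint URL and fetches the payload; the response schema is an assumption (it is not documented here), so the example only decodes and prints the raw JSON.

```python
# Minimal sketch of calling the quality endpoint shown above.
# Assumes the response body is JSON; the schema is not documented here.
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(ecosystem: str, repo: str) -> str:
    """Build the endpoint URL, e.g. for ('transformers', 'ymcui/MacBERT')."""
    return f"{API_BASE}/{ecosystem}/{repo}"

def fetch_quality(ecosystem: str, repo: str) -> dict:
    """Fetch and decode the JSON payload (requires network access)."""
    with urllib.request.urlopen(quality_url(ecosystem, repo)) as resp:
        return json.load(resp)

if __name__ == "__main__":
    print(quality_url("transformers", "ymcui/MacBERT"))
    # Uncomment to actually hit the API (100 requests/day without a key):
    # print(json.dumps(fetch_quality("transformers", "ymcui/MacBERT"), indent=2))
```

Keeping URL construction separate from the network call makes the helper easy to test without consuming any of the daily request quota.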
Higher-rated alternatives
SKTBrain/KoBERT
Korean BERT pre-trained cased (KoBERT)
monologg/KoELECTRA
Pretrained ELECTRA Model for Korean
monologg/KoBERT-Transformers
KoBERT on 🤗 Huggingface Transformers 🤗 (with Bug Fixed)
VinAIResearch/PhoBERT
PhoBERT: Pre-trained language models for Vietnamese (EMNLP-2020 Findings)
KB-AI-Research/KB-ALBERT
A Korean ALBERT model from KB Kookmin Bank, specialized for the economics/finance domain