ymcui/PERT

PERT: Pre-training BERT with Permuted Language Model

Quality score: 41 / 100 (Emerging)

PERT improves the performance of natural language understanding tasks by providing pre-trained language models for Chinese and English text. It takes raw text as input and outputs semantically richer representations, which can then be used to enhance text-based applications such as sentiment analysis, question answering, and entity recognition. It is useful for data scientists, NLP engineers, and researchers building intelligent text-processing systems.
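
Since PERT keeps the standard BERT architecture, its checkpoints can typically be loaded with the regular BERT classes from Hugging Face transformers. Below is a minimal Python sketch of extracting representations this way; the checkpoint name hfl/chinese-pert-base is an assumption, so substitute whichever PERT checkpoint you actually use.

from transformers import BertModel, BertTokenizer

# Checkpoint name is an assumption; use the PERT checkpoint you actually have.
MODEL_NAME = "hfl/chinese-pert-base"

tokenizer = BertTokenizer.from_pretrained(MODEL_NAME)
model = BertModel.from_pretrained(MODEL_NAME)

# Encode a sentence and pull out the contextual token representations.
inputs = tokenizer("今天天气真好", return_tensors="pt")
outputs = model(**inputs)
embeddings = outputs.last_hidden_state  # shape: (batch, seq_len, hidden_size)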

367 stars. No commits in the last 6 months.

Use this if you need to build or improve AI models that understand Chinese or English text, especially for tasks like question answering, text classification, or identifying entities in text.

Not ideal if your primary goal is correcting word order in text, since its performance varies across tasks, or if you need to integrate it with non-BERT architectures.

Tags: natural-language-processing, text-understanding, sentiment-analysis, question-answering, entity-recognition
Flags: Stale (6 months), No Package, No Dependents
Maintenance: 2 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 13 / 25


Stars: 367
Forks: 25
Language:
License: Apache-2.0
Last pushed: Jul 15, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/ymcui/PERT"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
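
If you prefer to query the endpoint from code rather than curl, a minimal Python sketch follows; it assumes the endpoint returns JSON, so inspect the actual payload before depending on specific fields.

import requests

# Same endpoint as the curl example above; no API key needed for 100 requests/day.
url = "https://pt-edge.onrender.com/api/v1/quality/transformers/ymcui/PERT"
response = requests.get(url, timeout=10)
response.raise_for_status()
data = response.json()  # assumed JSON payload; check the real schema
print(data)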