brightmart/bert_language_understanding

Pre-training of Deep Bidirectional Transformers for Language Understanding: pre-train TextCNN

Quality score: 43 / 100 (Emerging)

This project helps anyone working with text data to quickly and effectively train machine learning models for tasks like document classification. By using a 'pre-train and fine-tune' strategy, you can feed in raw text documents, and the system learns general language understanding from them, then applies that knowledge to your specific labeled dataset for classification. This results in more accurate models with less training time, even with moderately sized datasets.
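The pre-training step described above typically uses a masked-language-model objective: a fraction of tokens is hidden, and the model learns to predict them from context before being fine-tuned on the labeled classification data. A minimal sketch of that data preparation, with the `[MASK]` token and 15% rate as illustrative defaults rather than this repo's exact settings:

```python
import random

def mask_tokens(tokens, mask_rate=0.15, mask_token="[MASK]", seed=0):
    """Illustrative masked-language-model data prep: hide a fraction
    of tokens; the model is trained to recover them from context."""
    rng = random.Random(seed)
    masked = list(tokens)
    targets = {}  # position -> original token the model must predict
    for i, tok in enumerate(tokens):
        if rng.random() < mask_rate:
            targets[i] = tok
            masked[i] = mask_token
    return masked, targets

masked, targets = mask_tokens("the cat sat on the mat".split())
```

The fine-tuning stage then reuses the encoder weights learned this way and trains only a small classification head on the labeled dataset.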

967 stars. No commits in the last 6 months.

Use this if you need to build a text classification model and want to achieve better performance and faster training without needing massive amounts of labeled data from scratch.

Not ideal if you're looking for a plug-and-play solution for general text generation or advanced conversational AI, as it focuses on classification improvements.

text-classification natural-language-processing document-analysis machine-learning-training information-retrieval
No License · Stale (6 months) · No Package · No Dependents
Maintenance: 0 / 25
Adoption: 10 / 25
Maturity: 8 / 25
Community: 25 / 25


Stars: 967
Forks: 211
Language: Python
License: None
Last pushed: Jan 01, 2019
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/nlp/brightmart/bert_language_understanding"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
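The same request can be made from Python with the standard library. A small sketch, assuming the endpoint returns JSON; the `X-Api-Key` header name for authenticated requests is an assumption, not confirmed by this page:

```python
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category, owner, repo):
    """Build the endpoint URL shown in the curl example above."""
    return f"{API_BASE}/{category}/{owner}/{repo}"

def fetch_quality(category, owner, repo, api_key=None):
    """Fetch the quality record and parse it as JSON."""
    req = urllib.request.Request(quality_url(category, owner, repo))
    if api_key:
        # Assumed header name; check the API docs for the real one.
        req.add_header("X-Api-Key", api_key)
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

url = quality_url("nlp", "brightmart", "bert_language_understanding")
```

Without a key this stays within the 100-requests/day anonymous limit described above.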