lyeoni/pretraining-for-language-understanding

Pre-training of Language Models for Language Understanding

Score: 41 / 100 (Emerging)

This project helps natural language processing (NLP) practitioners prepare custom language models for various tasks. It takes a large collection of text, like Wikipedia articles, processes it into a usable format, and then trains a language model. The output is a pre-trained language model that can be used as a component in more complex applications that need to understand human language.

No commits in the last 6 months.

Use this if you need to train a foundational language model from scratch on a specific large text corpus for downstream NLP applications.

Not ideal if you are looking for a pre-trained, ready-to-use language model or a tool for fine-tuning an existing model.

Tags: Natural Language Processing, Text Analytics, Machine Learning Engineering, Data Science, Content Understanding
Badges: Stale (6m), No Package, No Dependents
Maintenance: 0 / 25
Adoption: 9 / 25
Maturity: 16 / 25
Community: 16 / 25


Stars: 83
Forks: 14
Language: Python
License: Apache-2.0
Last pushed: Aug 24, 2019
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/nlp/lyeoni/pretraining-for-language-understanding"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
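The curl call above can also be reproduced in Python with only the standard library. A minimal sketch, assuming the endpoint returns JSON (the page does not document the response schema, and the `quality_url`/`fetch_quality` helper names are illustrative, not part of the API):

```python
import json
import urllib.request

# Endpoint shape taken from the curl example above; the response schema is
# not documented here, so fetch_quality returns the decoded JSON as-is
# rather than assuming any particular fields.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the quality-endpoint URL for a repository."""
    return f"{API_BASE}/{category}/{owner}/{repo}"

def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """GET the quality data and return the parsed JSON body."""
    url = quality_url(category, owner, repo)
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)

# Example: build the URL for this project (the network call itself is
# left to the caller, so offline use of quality_url still works).
print(quality_url("nlp", "lyeoni", "pretraining-for-language-understanding"))
```

Within the free tier, `fetch_quality("nlp", "lyeoni", "pretraining-for-language-understanding")` would retrieve the same data shown on this card.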