google-research/electra
ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators
This project helps machine learning engineers and researchers efficiently train custom language-understanding models. It takes large amounts of raw text as input and produces a specialized text encoder ready for specific tasks such as question answering, text classification, or part-of-speech tagging. It's designed for those who need to build high-performing natural language processing systems without extensive computational resources.
2,371 stars. No commits in the last 6 months.
Use this if you need to pre-train a language model to understand text patterns for various downstream NLP applications, especially when compute resources are a concern.
Not ideal if you are a business user looking for a ready-to-use application, or if you only need to fine-tune an existing, broadly applicable language model for a simple task.
Stars: 2,371
Forks: 349
Language: Python
License: Apache-2.0
Category: NLP
Last pushed: Mar 23, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/nlp/google-research/electra"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
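The curl command above can also be wrapped in a few lines of Python. A minimal sketch using only the standard library, assuming the endpoint returns a JSON body (the response schema is not documented here, so the `fetch_quality` helper is illustrative):

```python
# Sketch: query the quality API shown in the curl example above.
# Assumption: the endpoint returns JSON; the helper names are ours, not the API's.
import json
from urllib.request import urlopen

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def api_url(category: str, owner: str, repo: str) -> str:
    """Build the API URL; path layout taken from the curl example."""
    return f"{BASE}/{category}/{owner}/{repo}"

def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """GET the endpoint and decode the JSON body (no key needed up to 100 requests/day)."""
    with urlopen(api_url(category, owner, repo)) as resp:
        return json.load(resp)

if __name__ == "__main__":
    print(api_url("nlp", "google-research", "electra"))
```

For the higher 1,000 requests/day tier, you would attach the free API key to the request; the documentation above does not specify the header name, so check the API docs before relying on this.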
Higher-rated alternatives
hangyav/textLSP
Language server for text spell and grammar check with various tools.
ujjax/pred-rnn
PredRNN: Recurrent Neural Networks for Predictive Learning using Spatiotemporal LSTMs
gsurma/text_predictor
Char-level RNN LSTM text generator 📄.
oliverguhr/fullstop-deep-punctuation-prediction
A model that predicts the punctuation of English, Italian, French and German texts.
ivanliu1989/SwiftKey-Natural-language
SwiftKey, our corporate partner in this capstone, builds a smart keyboard that makes it easier...