google-research/electra

ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators

Quality score: 48 / 100 (Emerging)

This project helps machine learning engineers and researchers efficiently pre-train custom language-understanding models. It takes large amounts of raw text as input and produces a specialized text encoder ready for downstream tasks such as question answering, text classification, or part-of-speech tagging. It's designed for those who need high-performing natural language processing systems without extensive computational resources.

2,371 stars. No commits in the last 6 months.

Use this if you need to pre-train a language model to understand text patterns for various downstream NLP applications, especially when compute resources are a concern.

Not ideal if you are a business user looking for a ready-to-use application, or if you only need to fine-tune an existing, broadly applicable language model for a simple task.

natural-language-processing machine-learning-engineering text-analysis language-modeling artificial-intelligence-research
Status: Stale (6 months), No Package, No Dependents
Maintenance: 0 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 22 / 25


Stars: 2,371
Forks: 349
Language: Python
License: Apache-2.0
Last pushed: Mar 23, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/nlp/google-research/electra"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
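The curl command above can also be issued programmatically. The sketch below is a minimal Python example, assuming the URL pattern `quality/<category>/<owner>/<repo>` inferred from the curl example and that the endpoint returns a JSON body; the actual response schema is not documented here, so the fetched payload is printed as-is rather than parsed into specific fields.

```python
# Hypothetical sketch of calling the quality-score API shown above.
# The URL pattern and the assumption of a JSON response are inferred
# from the curl example; the real response schema may differ.
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"


def build_url(category: str, owner: str, repo: str) -> str:
    """Build the per-repository endpoint URL (pattern inferred from the curl example)."""
    return f"{BASE}/{category}/{owner}/{repo}"


def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """Fetch and decode the JSON payload for one repository."""
    with urllib.request.urlopen(build_url(category, owner, repo)) as resp:
        return json.load(resp)


if __name__ == "__main__":
    # No API key needed for up to 100 requests/day, per the note above.
    data = fetch_quality("nlp", "google-research", "electra")
    print(data)
```

The network call is kept behind the `__main__` guard so the helper functions can be imported and reused without triggering a request.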