KoELECTRA and KoELECTRA-Pipeline

KoELECTRA-Pipeline is a downstream application wrapper that implements the Hugging Face Transformers pipeline interface around the KoELECTRA base model, so the two projects are complements rather than alternatives.

|                | KoELECTRA                              | KoELECTRA-Pipeline                     |
| -------------- | -------------------------------------- | -------------------------------------- |
| Overall score  | 51 (Established)                       | 37 (Emerging)                          |
| Maintenance    | 0/25                                   | 0/25                                   |
| Adoption       | 10/25                                  | 7/25                                   |
| Maturity       | 16/25                                  | 16/25                                  |
| Community      | 25/25                                  | 14/25                                  |
| Stars          | 630                                    | 40                                     |
| Forks          | 136                                    | 6                                      |
| Downloads      |                                        |                                        |
| Commits (30d)  | 0                                      | 0                                      |
| Language       | Python                                 | Python                                 |
| License        | Apache-2.0                             | Apache-2.0                             |
| Flags          | Stale 6m, No Package, No Dependents    | Stale 6m, No Package, No Dependents    |

About KoELECTRA

monologg/KoELECTRA

Pretrained ELECTRA Model for Korean

KoELECTRA provides pre-trained ELECTRA language models designed specifically for Korean. Given raw Korean text, the models produce contextual representations that can be fine-tuned for tasks such as classification, sentiment analysis, or sentence-pair relationships. This project suits data scientists and researchers who need to analyze and process large volumes of Korean-language data efficiently.

Korean-language-processing natural-language-understanding text-analysis machine-learning-research AI-development
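As a minimal sketch of how the base model can be used directly, the snippet below loads a KoELECTRA checkpoint through the standard Transformers ELECTRA classes and mean-pools the hidden states into one embedding per sentence. It assumes `transformers` and `torch` are installed; the model id `monologg/koelectra-base-v3-discriminator` refers to the v3 discriminator checkpoint the author publishes on the Hugging Face Hub.

```python
def embed_korean(texts, model_id="monologg/koelectra-base-v3-discriminator"):
    """Return one embedding vector per Korean sentence using KoELECTRA.

    A sketch: imports are done lazily so the function can be defined (and
    read) without transformers/torch installed; calling it downloads the
    checkpoint from the Hugging Face Hub on first use.
    """
    import torch
    from transformers import ElectraModel, ElectraTokenizer

    tokenizer = ElectraTokenizer.from_pretrained(model_id)
    model = ElectraModel.from_pretrained(model_id)
    model.eval()

    enc = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        out = model(**enc)
    # Mean-pool the last hidden states over the token dimension.
    return out.last_hidden_state.mean(dim=1)
```

Calling `embed_korean(["한국어 문장입니다"])` would yield a tensor of shape `(1, hidden_size)`, suitable as input features for a downstream classifier.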

About KoELECTRA-Pipeline

monologg/KoELECTRA-Pipeline

Transformers Pipeline with KoELECTRA

This project helps process Korean text by classifying sentiments, identifying named entities, and answering questions. You provide Korean text or a question with context, and it outputs the sentiment (positive/negative), recognized entities like names and organizations, or the answer to your question. Anyone working with Korean language data, such as market researchers, content analysts, or information retrieval specialists, could use this.

Korean-text-analysis sentiment-analysis named-entity-recognition question-answering market-research
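What the Pipeline wrapper does can be sketched with the stock `transformers.pipeline` factory, which pairs a fine-tuned KoELECTRA checkpoint with a task head. This assumes `transformers` is installed; the checkpoint id `monologg/koelectra-base-finetuned-nsmc` (a KoELECTRA model fine-tuned on the NSMC movie-review corpus) is an assumption for illustration, and any other fine-tuned KoELECTRA checkpoint can be substituted.

```python
def build_korean_sentiment_pipeline(
    model_id="monologg/koelectra-base-finetuned-nsmc",
):
    """Build a sentiment-analysis pipeline backed by a KoELECTRA checkpoint.

    A sketch: the import is lazy so the function is readable without
    transformers installed. The model_id default is illustrative; swap in
    the fine-tuned KoELECTRA checkpoint you actually want to serve.
    """
    from transformers import pipeline

    return pipeline("sentiment-analysis", model=model_id, tokenizer=model_id)


# Usage (downloads the model on first call):
# clf = build_korean_sentiment_pipeline()
# clf("이 영화 정말 재미있어요")  # returns a list of {"label", "score"} dicts
```

The same factory pattern applies to the project's other tasks: passing `"ner"` or `"question-answering"` as the first argument, with a correspondingly fine-tuned checkpoint, yields the named-entity and QA behaviors described above.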

Scores updated daily from GitHub, PyPI, and npm data.