monologg/DistilKoBERT

Distillation of KoBERT from SKTBrain (Lightweight KoBERT)

Score: 42 / 100 (Emerging)

This project offers a more efficient version of KoBERT, a powerful language model for Korean text. It takes raw Korean text as input and processes it for tasks like sentiment analysis, named entity recognition, or question answering, with significantly faster inference and a smaller resource footprint than the original model. Anyone working with Korean language data and building applications like chatbots, content analysis tools, or information retrieval systems would find this useful.
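A minimal usage sketch of the kind of pipeline described above, assuming the distilled model is published on the Hugging Face Hub under the id `monologg/distilkobert` (the id and the `transformers` loading path are assumptions; the project's README may require its own `KoBertTokenizer` rather than a generic tokenizer for correct Korean tokenization):

```python
# Sketch (assumptions): model id "monologg/distilkobert" on the Hugging Face
# Hub; the project's custom KoBertTokenizer may be required instead of
# AutoTokenizer for correct tokenization of Korean text.
MODEL_ID = "monologg/distilkobert"

def load_distilkobert():
    """Download and return the distilled encoder for feature extraction.

    Requires `pip install transformers` and network access on first call;
    the import is kept local so the module loads without transformers.
    """
    from transformers import AutoModel
    return AutoModel.from_pretrained(MODEL_ID)
```

The returned encoder produces contextual embeddings that downstream heads (sentiment classification, NER tagging, QA span prediction) would be trained on top of.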

198 stars. No commits in the last 6 months.

Use this if you need to process Korean text quickly and efficiently, especially when deploying models on devices with limited computational power or for real-time applications.

Not ideal if you require the absolute highest accuracy for highly complex natural language understanding tasks and have ample computational resources for the full KoBERT model.

Tags: Korean-NLP, text-analysis, language-modeling, sentiment-analysis, named-entity-recognition
Flags: Stale (6 months), No Package, No Dependents
Maintenance 0 / 25
Adoption 10 / 25
Maturity 16 / 25
Community 16 / 25


Stars: 198
Forks: 23
Language: Python
License: Apache-2.0
Last pushed: Sep 06, 2023
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/monologg/DistilKoBERT"
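The same request can be made from Python with only the standard library. Only the endpoint URL comes from this page; the shape of the JSON response body is an assumption:

```python
# Sketch: fetching the quality data from Python instead of curl.
# The endpoint URL is taken from this page; the response's JSON
# schema is an assumption and may differ from what the API returns.
import json
from urllib.request import urlopen

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(registry: str, package: str) -> str:
    """Build the quality-endpoint URL for a given registry and package."""
    return f"{API_BASE}/{registry}/{package}"

def fetch_quality(registry: str, package: str) -> dict:
    """GET the endpoint and parse the JSON body (requires network access)."""
    with urlopen(quality_url(registry, package)) as resp:
        return json.load(resp)

# Example: quality_url("transformers", "monologg/DistilKoBERT")
```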

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000/day.