BM-K/KoMiniLM
Korean Light Weight Language Model
This project provides compact, efficient Korean language models for developers building applications where speed and resource usage are critical. The models take Korean text as input and provide language-understanding capabilities for tasks such as sentiment analysis, named entity recognition, and question answering, letting software developers integrate Korean natural language processing into their products without the overhead of larger models.
No commits in the last 6 months.
Use this if you are a developer creating applications that require Korean natural language understanding and need to run these models quickly and efficiently on constrained hardware or with high throughput.
Not ideal if you require the absolute highest accuracy for Korean NLP tasks and are not constrained by computational resources or latency.
Stars: 31
Forks: 2
Language: Python
License: CC-BY-SA-4.0
Category: nlp
Last pushed: May 26, 2023
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/nlp/BM-K/KoMiniLM"
Open to everyone: 100 requests/day with no API key. Register for a free key to get 1,000 requests/day.
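The curl command above can also be scripted. A minimal Python sketch using only the standard library; it assumes the endpoint returns a JSON body, since the response schema is not documented here:

```python
import json
import urllib.request

# Base path taken from the curl example above.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality"


def build_url(category: str, owner: str, repo: str) -> str:
    """Compose the quality-endpoint URL for a repository."""
    return f"{API_BASE}/{category}/{owner}/{repo}"


def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """GET the endpoint and decode the JSON body.

    Note: this hits the live service and counts against the
    100 requests/day anonymous rate limit.
    """
    with urllib.request.urlopen(build_url(category, owner, repo)) as resp:
        return json.load(resp)


# URL for the repository on this page:
url = build_url("nlp", "BM-K", "KoMiniLM")
```

Calling `fetch_quality("nlp", "BM-K", "KoMiniLM")` would perform the same request as the curl example; the `fetch_quality` name and the JSON assumption are this sketch's, not the API's documentation.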
Higher-rated alternatives
google/langfun
OO for LLMs
tanaos/artifex
Small Language Model Inference, Fine-Tuning and Observability. No GPU, no labeled data needed.
preligens-lab/textnoisr
Add random noise to a text dataset while precisely controlling the quality of the result
vulnerability-lookup/VulnTrain
A tool to generate datasets and models based on vulnerability descriptions from @Vulnerability-Lookup.
masakhane-io/masakhane-mt
Machine Translation for Africa