BM-K/KoSimCSE-SKT
Simple Contrastive Learning of Korean Sentence Embeddings
This project trains Korean sentence-embedding models with simple contrastive learning. Given Korean text data, it learns representations that capture the meaning of, and relationships between, sentences. The resulting model is effective for tasks like semantic search and text similarity, primarily benefiting data scientists and NLP engineers building Korean-language applications.
No commits in the last 6 months.
Use this if you need to build or enhance applications that require understanding the semantic meaning of Korean sentences, such as search engines or recommendation systems.
Not ideal if your primary need is for languages other than Korean, or if you are looking for a simple, off-the-shelf solution without any model training or fine-tuning.
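Once a KoSimCSE-style model has mapped sentences to embedding vectors, similarity is typically scored with cosine similarity. A minimal sketch (the toy vectors here stand in for real model embeddings, which you would obtain from the trained model):

```python
import math

def cosine_similarity(a, b):
    # Cosine of the angle between two embedding vectors:
    # ~1.0 = very similar direction, ~0.0 = unrelated (orthogonal).
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Toy example: identical vectors score 1.0, orthogonal ones score 0.0.
print(cosine_similarity([1.0, 0.0], [1.0, 0.0]))  # 1.0
print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))  # 0.0
```

In practice the vectors would come from encoding two Korean sentences with the trained model; the scoring step itself is this simple.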
Stars
53
Forks
8
Language
Python
License
—
Category
Last pushed
Dec 09, 2022
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/nlp/BM-K/KoSimCSE-SKT"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
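The same endpoint can be called from Python with the standard library. A sketch assuming the URL pattern shown in the curl command above; the authentication header name is an assumption, since the listing does not document how a key is passed:

```python
import json
import urllib.request
from typing import Optional

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, repo: str) -> str:
    # Build the endpoint URL, e.g. category "nlp", repo "BM-K/KoSimCSE-SKT".
    return f"{BASE}/{category}/{repo}"

def fetch_quality(category: str, repo: str, api_key: Optional[str] = None) -> dict:
    # 100 requests/day without a key; a free key raises the limit to 1,000/day.
    req = urllib.request.Request(quality_url(category, repo))
    if api_key:
        # Hypothetical header -- confirm against the API docs before relying on it.
        req.add_header("Authorization", f"Bearer {api_key}")
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

For example, `fetch_quality("nlp", "BM-K/KoSimCSE-SKT")` reproduces the curl call above; the response schema is not documented here, so inspect the returned JSON.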
Higher-rated alternatives
princeton-nlp/SimCSE
[EMNLP 2021] SimCSE: Simple Contrastive Learning of Sentence Embeddings https://arxiv.org/abs/2104.08821
n-waves/multifit
The code to reproduce results from paper "MultiFiT: Efficient Multi-lingual Language Model...
yxuansu/SimCTG
[NeurIPS'22 Spotlight] A Contrastive Framework for Neural Text Generation
alibaba-edu/simple-effective-text-matching
Source code of the ACL2019 paper "Simple and Effective Text Matching with Richer Alignment Features".
Shark-NLP/OpenICL
OpenICL is an open-source framework to facilitate research, development, and prototyping of...