perceptiveshawty/RankCSE

Implementation of "RankCSE: Unsupervised Sentence Representation Learning via Learning to Rank" (ACL 2023)

Score: 37 / 100 (Emerging)

This project helps researchers in natural language processing (NLP) create better text embeddings. It takes large text datasets, such as Wikipedia articles, and processes them to generate numerical representations of sentences. These representations can then be used in downstream NLP tasks to measure how similar different sentences are. It is aimed primarily at NLP researchers and machine learning engineers working on sentence understanding.

No commits in the last 6 months.

Use this if you are an NLP researcher and need to train models to understand semantic similarity between sentences without relying on labeled data.

Not ideal if you are unfamiliar with training deep learning models or lack the computational resources for large-scale text processing.

natural-language-processing text-embeddings semantic-similarity unsupervised-learning language-modeling
Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 8 / 25
Maturity 16 / 25
Community 13 / 25
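The four subscores (each out of 25) add up to the overall score on this card. A minimal sanity check in shell, using the numbers shown above (variable names are illustrative, not part of the scoring API):

```shell
# Subscores from the card above, each out of a maximum of 25
maintenance=0
adoption=8
maturity=16
community=13
# Sum them to reproduce the overall score out of 100
echo $(( maintenance + adoption + maturity + community ))  # prints 37
```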


Stars: 48
Forks: 7
Language: Python
License: MIT
Last pushed: Mar 12, 2024
Commits (30d): 0

Get this data via API:

    curl "https://pt-edge.onrender.com/api/v1/quality/nlp/perceptiveshawty/RankCSE"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.