ClaudiaShu/UNA

This is the official code for our paper "Unsupervised hard Negative Augmentation for contrastive learning".

Score: 12 / 100 (Experimental)

This project helps machine learning engineers and researchers improve their natural language processing models. It takes a training dataset of text and generates additional 'hard negative' examples using TF-IDF, then outputs an enhanced dataset for training. This process helps create more robust and accurate sentence embeddings for tasks like semantic similarity.
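The core idea can be sketched in a few lines: score each token by TF-IDF, then swap a sentence's most salient token for a salient token borrowed from another sentence, producing a lexically similar but semantically shifted "hard negative". This is a minimal illustrative sketch only; the function names are hypothetical and the paper's actual UNA procedure, scoring, and replacement strategy may differ.

```python
import math
from collections import Counter

def tfidf_weights(corpus):
    """Per-document token -> TF-IDF weight. corpus is a list of token lists."""
    n = len(corpus)
    df = Counter(tok for doc in corpus for tok in set(doc))  # document frequency
    out = []
    for doc in corpus:
        tf = Counter(doc)
        out.append({
            t: (tf[t] / len(doc)) * math.log((1 + n) / (1 + df[t]))
            for t in tf
        })
    return out

def hard_negative(i, corpus, weights):
    """Swap the most salient token of sentence i with a salient token
    borrowed from another sentence, giving a lexically close negative."""
    doc = corpus[i]
    target = max(doc, key=lambda t: weights[i][t])      # most informative token
    j = (i + 1) % len(corpus)                           # donor sentence
    repl = max(corpus[j], key=lambda t: weights[j][t])  # its most informative token
    return [repl if t == target else t for t in doc]

corpus = [
    ["the", "cat", "sat", "on", "the", "mat"],
    ["the", "dog", "ran", "in", "the", "park"],
]
w = tfidf_weights(corpus)
print(hard_negative(0, corpus, w))  # ['the', 'dog', 'sat', 'on', 'the', 'mat']
```

The augmented negatives keep most of the surface form of the anchor sentence, which is what makes them "hard" for a contrastive objective.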

No commits in the last 6 months.

Use this if you are a machine learning engineer working on natural language processing and want to improve the robustness of your sentence embedding models by augmenting your training data with challenging negative examples.

Not ideal if you are looking for a pre-trained model to use directly, as this project focuses on data augmentation for model training.

natural-language-processing machine-learning-engineering data-augmentation text-embeddings model-training
No License · Stale (6m) · No Package · No Dependents
Maintenance: 0 / 25
Adoption: 4 / 25
Maturity: 8 / 25
Community: 0 / 25


Stars: 8
Forks:
Language: Python
License: None
Last pushed: Jan 18, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/nlp/ClaudiaShu/UNA"

Open to everyone: 100 requests/day with no key needed; get a free key for 1,000/day.
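The same endpoint can be called from Python with only the standard library. A minimal sketch, assuming the endpoint shown in the curl example above; the helper names are hypothetical and the response schema is not documented here, so inspect the returned keys before relying on them.

```python
import json
import urllib.request
from urllib.parse import quote

BASE = "https://pt-edge.onrender.com/api/v1/quality/nlp"

def quality_url(owner: str, repo: str) -> str:
    # Build the endpoint URL for one repository.
    return f"{BASE}/{quote(owner)}/{quote(repo)}"

def fetch_quality(owner: str, repo: str) -> dict:
    # Fetch and decode the JSON payload (field names depend on the API).
    with urllib.request.urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)

print(quality_url("ClaudiaShu", "UNA"))
```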