mourga/contrastive-active-learning

Code for the EMNLP 2021 Paper "Active Learning by Acquiring Contrastive Examples" & the ACL 2022 Paper "On the Importance of Effectively Adapting Pretrained Language Models for Active Learning"

Quality score: 39 / 100 (Emerging)

This project helps machine learning practitioners in Natural Language Processing (NLP) efficiently train text classification models by intelligently selecting the most informative data to label. It takes unlabeled text data for tasks like sentiment analysis or topic classification and outputs a high-performing model with less human effort in data annotation. The primary users are ML engineers or researchers working on NLP applications who need to optimize data labeling costs.
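The paper's CAL strategy acquires contrastive examples near the decision boundary; as a generic illustration of the pool-based active-learning loop this project automates, here is a minimal sketch using plain uncertainty sampling on synthetic data (model, dataset, and batch sizes are illustrative assumptions, not the repo's API):

```python
# Minimal active-learning loop sketch: generic uncertainty sampling,
# NOT the paper's CAL acquisition. All names and numbers are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for an unlabeled text-classification corpus.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)
rng = np.random.default_rng(0)

# Start with a small labeled seed; the rest is the unlabeled pool.
labeled = list(rng.choice(len(X), size=20, replace=False))
pool = [i for i in range(len(X)) if i not in labeled]

for _round in range(5):
    clf = LogisticRegression(max_iter=1000).fit(X[labeled], y[labeled])
    # Acquire the 10 pool examples the model is least confident about.
    probs = clf.predict_proba(X[pool])
    uncertainty = 1 - probs.max(axis=1)
    picked = np.argsort(uncertainty)[-10:]
    # Move the acquired examples from the pool to the labeled set,
    # simulating a human annotator labeling them.
    for i in sorted(picked, reverse=True):
        labeled.append(pool.pop(i))

print(len(labeled))  # 20 seed + 5 rounds x 10 acquisitions = 70
```

Each round retrains on the labeled set and queries only the most informative pool items, which is how such a loop reduces annotation cost relative to labeling the whole corpus up front.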

128 stars. No commits in the last 6 months.

Use this if you need to train accurate NLP models for tasks like sentiment analysis or topic classification with limited labeled data and want to reduce the cost and time spent on manual data annotation.

Not ideal if you already have abundant labeled data for your NLP task or if your task is outside of text classification.

Tags: Natural Language Processing · Text Classification · Machine Learning · Data Labeling · Model Training Efficiency
Status: Stale (6 months) · No package · No dependents
Maintenance: 0 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 13 / 25


Stars: 128
Forks: 13
Language: Python
License: GPL-3.0
Last pushed: May 24, 2022
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/mourga/contrastive-active-learning"

Open to everyone: 100 requests/day with no key needed; a free key raises the limit to 1,000/day.