alon-albalak/FLAD

Few-shot Learning with Auxiliary Data

Quality score: 24 / 100 (Experimental)

This project helps machine learning practitioners improve natural language processing models when only a small amount of labeled data is available for the target task. By leveraging additional, related datasets (auxiliary data), it trains more generalizable models: it takes text-based auxiliary and target datasets and outputs a fine-tuned language model that performs better on the target task. It is aimed at AI/ML researchers and practitioners who need strong performance from language models despite limited task-specific data.
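The core idea of mixing a small target dataset with auxiliary datasets during training can be sketched as follows. This is a minimal, illustrative sketch only, not the repository's actual algorithm; the dataset names, the uniform sampling scheme, and the `aux_ratio` parameter are all hypothetical.

```python
import random

def sample_training_batches(target, auxiliary, n_batches, batch_size,
                            aux_ratio=0.5, seed=0):
    """Interleave batches from a small target dataset with batches drawn
    from auxiliary datasets (sampled uniformly here for illustration).

    target    : list of target-task examples
    auxiliary : dict mapping dataset name -> list of examples
    Returns a list of (source_name, batch) pairs.
    """
    rng = random.Random(seed)
    batches = []
    for _ in range(n_batches):
        if auxiliary and rng.random() < aux_ratio:
            # Pick one auxiliary dataset at random, then sample a batch from it.
            name, data = rng.choice(sorted(auxiliary.items()))
        else:
            name, data = "target", target
        batch = [rng.choice(data) for _ in range(batch_size)]
        batches.append((name, batch))
    return batches

# Hypothetical toy data standing in for real text datasets.
target = ["few-shot example 1", "few-shot example 2"]
auxiliary = {
    "nli": ["premise/hypothesis pair"],
    "qa": ["question/answer pair"],
}

for name, batch in sample_training_batches(target, auxiliary,
                                           n_batches=4, batch_size=2):
    print(name, len(batch))
```

In a real setup, each batch would feed a fine-tuning step of the language model, and the sampling weights over auxiliary datasets could be adapted during training rather than fixed.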

No commits in the last 6 months.

Use this if you are working with few-shot natural language processing tasks and have access to various auxiliary datasets that could potentially help improve your model's generalization.

Not ideal if you are not working with language models, or if you do not have access to any auxiliary data to supplement your few-shot learning.

natural-language-processing few-shot-learning language-model-fine-tuning text-classification model-generalization
No License · Stale (6m) · No Package · No Dependents
Maintenance: 0 / 25
Adoption: 7 / 25
Maturity: 8 / 25
Community: 9 / 25


Stars: 31
Forks: 3
Language: Python
License: none
Last pushed: Dec 08, 2023
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/alon-albalak/FLAD"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.