amazon-science/adaptive-in-context-learning

AdaICL: Which Examples to Annotate for In-Context Learning? Towards Effective and Efficient Selection

34 / 100 (Emerging)

This project helps machine learning engineers and researchers get better results from large language models (LLMs) when the budget for annotating examples is limited. Given a pool of unlabeled data and a pre-trained LLM, it selects a small, highly informative set of examples to annotate, which can then serve as in-context demonstrations to maximize model performance with fewer labeled resources. It's designed for those who need to improve LLM performance without extensive manual data labeling.
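The selection idea the description outlines can be sketched in a few lines: score each unlabeled example by how uncertain the model is about it, then spend the annotation budget on the most uncertain ones. This is a minimal illustrative sketch, not the repository's actual algorithm; the function names and the entropy-based scorer are assumptions.

```python
import math

def entropy(probs):
    """Shannon entropy of a predicted label distribution (illustrative scorer)."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def select_examples(pool, uncertainty, budget):
    """Greedily pick the `budget` most uncertain unlabeled examples.

    `pool` is a list of example ids; `uncertainty` maps an example to a
    score, e.g. the entropy of the LLM's predicted label distribution.
    """
    ranked = sorted(pool, key=uncertainty, reverse=True)
    return ranked[:budget]

# Example: three unlabeled examples with hypothetical predicted distributions.
preds = {"a": [0.5, 0.5], "b": [0.9, 0.1], "c": [0.6, 0.4]}
chosen = select_examples(list(preds), lambda x: entropy(preds[x]), budget=2)
# "a" and "c" have the highest entropy, so they are selected for annotation.
```

AdaICL itself goes beyond this plain uncertainty ranking (it also aims for diverse, representative selections), but the budget-constrained scoring loop above is the core shape of the problem.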

No commits in the last 6 months.

Use this if you are an ML engineer or researcher who wants to improve LLM performance via in-context learning but is constrained in how many examples you can afford to label manually.

Not ideal if you already have a large, high-quality labeled dataset, or if optimizing an annotation budget for LLMs is not your goal.

Machine-Learning-Engineering Natural-Language-Processing Large-Language-Models Active-Learning Data-Annotation-Optimization
Stale 6m No Package No Dependents
Maintenance 0 / 25
Adoption 6 / 25
Maturity 16 / 25
Community 12 / 25

How are scores calculated?

Stars

20

Forks

3

Language

Python

License

Apache-2.0

Last pushed

Oct 30, 2023

Commits (30d)

0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/amazon-science/adaptive-in-context-learning"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
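The same endpoint can be queried from Python using only the standard library. The helper names below are illustrative, and the response's JSON schema is not documented here, so the decoded object is returned as-is:

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality/llm-tools"

def quality_url(owner: str, repo: str) -> str:
    """Build the quality-report URL for a given GitHub owner/repo."""
    return f"{BASE}/{owner}/{repo}"

def fetch_quality(owner: str, repo: str):
    """Fetch and decode the JSON quality report (makes a network call)."""
    with urllib.request.urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)

url = quality_url("amazon-science", "adaptive-in-context-learning")
```

Calling `fetch_quality("amazon-science", "adaptive-in-context-learning")` retrieves the same data as the curl command above; add an API key to the request if you need the higher rate limit.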