amazon-science/adaptive-in-context-learning
AdaICL: Which Examples to Annotate for In-Context Learning? Towards Effective and Efficient Selection
This project helps machine learning engineers and researchers get the most out of large language models (LLMs) when the budget for annotating examples is limited. Given a pool of unlabeled data and a pre-trained LLM, it selects a small, highly informative set of examples to annotate, which then serve as in-context demonstrations that maximize model performance with fewer labeling resources. It's designed for those who need to improve LLM performance without extensive manual data labeling.
No commits in the last 6 months.
Use this if you are an ML engineer or researcher who needs to adapt large language models via in-context learning but can only afford to manually label a limited number of examples.
Not ideal if you already have a large, high-quality labeled dataset, or if optimizing an annotation budget for LLMs is not your primary goal.
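To make the idea concrete, here is a minimal sketch of annotation-budget selection that combines model uncertainty with diversity. This is an illustrative heuristic only, not the repository's actual API or the AdaICL algorithm (which, per the paper, uses a graph-based set-cover formulation); all function names and inputs below are hypothetical.

```python
import numpy as np

def select_examples(embeddings, uncertainties, budget):
    """Illustrative uncertainty + diversity selection (NOT AdaICL itself).

    Picks the most uncertain example first, then greedily adds examples
    whose uncertainty, weighted by distance to the already-selected set,
    is highest -- so the chosen set is both informative and diverse.
    """
    selected = [int(np.argmax(uncertainties))]
    while len(selected) < budget:
        # Distance of every candidate to its nearest selected example.
        diffs = embeddings[:, None, :] - embeddings[selected][None, :, :]
        dists = np.min(np.linalg.norm(diffs, axis=-1), axis=1)
        # Score candidates; already-selected items are excluded.
        scores = uncertainties * dists
        scores[selected] = -np.inf
        selected.append(int(np.argmax(scores)))
    return selected

# Toy usage: 100 unlabeled examples, a labeling budget of 5.
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(100, 16))     # e.g. sentence embeddings
uncertainties = rng.uniform(size=100)       # e.g. 1 - max class probability
chosen = select_examples(embeddings, uncertainties, budget=5)
```

The selected indices point at the examples worth sending to human annotators; once labeled, they can be used as in-context demonstrations.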
Stars
20
Forks
3
Language
Python
License
Apache-2.0
Category
Last pushed
Oct 30, 2023
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/amazon-science/adaptive-in-context-learning"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
MantisAI/sieves
Plug-and-play document AI with zero-shot models.
xiaoya-li/Instruction-Tuning-Survey
Project for the paper entitled `Instruction Tuning for Large Language Models: A Survey`
rafaelpierre/bullet
bullet: an LLM-based zero-shot/few-shot text classification framework
TencentARC-QQ/TagGPT
TagGPT: Large Language Models are Zero-shot Multimodal Taggers
andrewzamai/SLIMER_IT
An Instruction-tuned LLM for zero-shot NER on Italian