HKUNLP/icl-ceil

[ICML 2023] Code for our paper “Compositional Exemplars for In-context Learning”.

Score: 37 / 100 (Emerging)

This project helps machine learning engineers and researchers improve the accuracy of large language models (LLMs) on specific tasks by carefully selecting example input/output pairs, known as in-context examples. Given a dataset of tasks and their solutions, it outputs an optimized set of examples that, when fed to an LLM alongside a new problem, leads to more accurate results. It is aimed at professionals who work with LLMs and want to steer model behavior without retraining or fine-tuning the model itself.
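To make the idea concrete, here is a minimal, hypothetical sketch of the simplest baseline this line of work improves on: retrieving the k most similar examples by embedding similarity and prepending them to the prompt. The function names, the toy 2-dimensional embeddings, and the prompt template are all illustrative assumptions, not the repository's API; the paper's actual method (CEIL) goes further by learning to select a jointly diverse and relevant *set* of exemplars rather than scoring each one independently.

```python
# Baseline sketch (NOT the CEIL method): pick in-context examples by
# cosine similarity to the query embedding, then build a few-shot prompt.
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def select_examples(pool, query_vec, k=2):
    """Return the k pool entries whose embeddings are closest to the query."""
    ranked = sorted(pool, key=lambda ex: cosine(ex["vec"], query_vec),
                    reverse=True)
    return ranked[:k]

def build_prompt(examples, query_text):
    """Concatenate selected input/output demos, then the new problem."""
    demos = "\n".join(f"Input: {ex['x']}\nOutput: {ex['y']}"
                      for ex in examples)
    return f"{demos}\nInput: {query_text}\nOutput:"
```

Selecting each example independently like this can yield redundant demos; modeling the selected set jointly (as CEIL does) trades off relevance against diversity.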

103 stars. No commits in the last 6 months.

Use this if you are a machine learning engineer or researcher looking to enhance the accuracy of a frozen large language model on specific tasks by providing it with optimal in-context examples.

Not ideal if you are a non-technical user or if you need to perform full model fine-tuning or training rather than just example selection for in-context learning.

natural-language-processing large-language-models in-context-learning machine-learning-research model-prompting
Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 9 / 25
Maturity 16 / 25
Community 12 / 25


Stars: 103
Forks: 11
Language: Python
License: Apache-2.0
Last pushed: Mar 15, 2023
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/HKUNLP/icl-ceil"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.