mukhal/icl-ensembling

[Me-FoMo ICLR 2023 - Oral] Exploring Demonstration Ensembling for In-context Learning

Quality score: 20 / 100 (Experimental)

This project helps machine learning researchers and practitioners evaluate and improve the performance of large language models (LLMs) on specific tasks, especially when only a few labeled examples (demonstrations) are available. You supply your dataset and a pre-trained LLM, and it outputs predictions for your task using various in-context learning ensembling strategies. It is well suited to those working on LLM fine-tuning or prompt engineering.
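The core idea behind demonstration ensembling can be illustrated with a small sketch: partition the demonstrations into buckets, score the query once per bucket (one in-context-learning call each), and average the per-label probabilities across buckets. This is a minimal illustration, not the repo's exact implementation; `score_fn` here is a hypothetical stand-in for a real LLM call.

```python
import random

def ensemble_predict(demos, query, labels, score_fn, n_buckets=3, seed=0):
    """Demonstration ensembling (sketch): split demos into buckets,
    score the query once per bucket, and average label probabilities."""
    rng = random.Random(seed)
    shuffled = demos[:]
    rng.shuffle(shuffled)
    buckets = [shuffled[i::n_buckets] for i in range(n_buckets)]
    totals = {label: 0.0 for label in labels}
    for bucket in buckets:
        probs = score_fn(bucket, query, labels)  # one ICL call per bucket
        for label in labels:
            totals[label] += probs[label]
    return max(totals, key=totals.get)

def toy_score_fn(bucket, query, labels):
    """Hypothetical stand-in for an LLM: probabilities proportional to
    label frequency among the bucket's demonstrations."""
    counts = {label: 1e-9 for label in labels}
    for _, y in bucket:
        counts[y] += 1.0
    z = sum(counts.values())
    return {label: counts[label] / z for label in labels}

demos = [("great movie", "pos"), ("loved it", "pos"),
         ("fantastic", "pos"), ("a true classic", "pos"),
         ("terrible", "neg"), ("awful plot", "neg")]
print(ensemble_predict(demos, "what a film", ["pos", "neg"], toy_score_fn))
```

Averaging probabilities across buckets (rather than concatenating all demonstrations into one prompt) keeps each prompt short and reduces sensitivity to any single demonstration's ordering or quality.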

No commits in the last 6 months.

Use this if you are a machine learning researcher or engineer experimenting with in-context learning to achieve better accuracy on specific natural language processing tasks with limited examples.

Not ideal if you are looking for a plug-and-play solution for a business problem and are not comfortable with machine learning experimentation and Python scripting.

Tags: natural-language-processing, large-language-models, few-shot-learning, prompt-engineering, machine-learning-research
Badges: Stale (6 months) · No Package · No Dependents

Maintenance: 0 / 25
Adoption: 4 / 25
Maturity: 16 / 25
Community: 0 / 25


Stars: 5
Forks: —
Language: Python
License: —
Last pushed: Aug 09, 2024
Commits (30d): 0

Get this data via the API:

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/mukhal/icl-ensembling"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.