mukhal/icl-ensembling
[Me-FoMo ICLR 2023 - Oral] Exploring Demonstration Ensembling for In-context Learning
This project helps machine learning researchers and practitioners evaluate and improve the performance of large language models (LLMs) on specific tasks, especially when only a few labeled examples (demonstrations) are available. You provide your dataset and a pre-trained LLM, and it outputs predictions for your task using various demonstration-ensembling strategies for in-context learning (illustrated in the sketch below). This is ideal for those working on in-context learning or prompt engineering with LLMs.
No commits in the last 6 months.
Use this if you are a machine learning researcher or engineer experimenting with in-context learning to achieve better accuracy on specific natural language processing tasks with limited examples.
Not ideal if you are looking for a plug-and-play solution to a business problem or are not comfortable with machine-learning experimentation and Python scripting.
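To make "demonstration ensembling" concrete, here is a minimal, self-contained sketch of the general idea: split the demonstrations into subsets, build one prompt per subset, and combine the per-label scores across subsets. This is not the repository's implementation; the GPT-2 backbone, the toy sentiment task, and the averaged-log-probability combination rule are assumptions chosen for illustration only.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Small causal LM purely for illustration (assumption, not the repo's choice).
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

# Toy demonstrations and query for a binary sentiment task (assumed example data).
demos = [
    ("The movie was wonderful.", "positive"),
    ("I hated every minute.", "negative"),
    ("A delightful surprise.", "positive"),
    ("Dull and far too long.", "negative"),
]
labels = ["positive", "negative"]
query = "The acting felt flat."

def label_logprob(prompt: str, label: str) -> float:
    """Log-probability of `label` continuing `prompt` under the LM."""
    prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
    label_ids = tokenizer(" " + label, return_tensors="pt").input_ids
    ids = torch.cat([prompt_ids, label_ids], dim=1)
    with torch.no_grad():
        logits = model(ids).logits
    # Score only the positions that predict the label tokens.
    logprobs = torch.log_softmax(logits[0, prompt_ids.shape[1] - 1:-1], dim=-1)
    return logprobs.gather(1, label_ids[0].unsqueeze(1)).sum().item()

# Ensemble: prompt with each demonstration subset separately, then average
# the per-label log-probabilities across subsets (one possible combination rule).
subsets = [demos[:2], demos[2:]]
scores = {lab: 0.0 for lab in labels}
for subset in subsets:
    prompt = "".join(f"Review: {x}\nSentiment: {y}\n\n" for x, y in subset)
    prompt += f"Review: {query}\nSentiment:"
    for lab in labels:
        scores[lab] += label_logprob(prompt, lab) / len(subsets)

print(max(scores, key=scores.get))

The repository explores several such ensembling strategies; this sketch only shows the basic pattern of scoring a query under multiple demonstration subsets and aggregating.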
Stars: 5
Forks: —
Language: Python
License: —
Category: —
Last pushed: Aug 09, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/mukhal/icl-ensembling"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
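The same request can be made from Python with the `requests` library. This is a minimal sketch hitting the endpoint shown above; the structure of the returned JSON and the way an API key is supplied are not documented here, so inspect the response before relying on any particular field.

import requests

# Same endpoint as the curl command above.
url = (
    "https://pt-edge.onrender.com/api/v1/quality/"
    "transformers/mukhal/icl-ensembling"
)
resp = requests.get(url, timeout=10)
resp.raise_for_status()  # raises on 4xx/5xx, e.g. if the daily quota is exhausted
print(resp.json())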
Higher-rated alternatives
DaoD/INTERS
This is the repository for our paper "INTERS: Unlocking the Power of Large Language Models in...
declare-lab/instruct-eval
This repository contains code to quantitatively evaluate instruction-tuned models such as Alpaca...
Haiyang-W/TokenFormer
[ICLR2025 Spotlight🔥] Official Implementation of TokenFormer: Rethinking Transformer Scaling...
hkust-nlp/deita
Deita: Data-Efficient Instruction Tuning for Alignment [ICLR2024]
kehanlu/DeSTA2
Code and model for ICASSP 2025 Paper "Developing Instruction-Following Speech Language Model...