declare-lab/instruct-eval
This repository contains code to quantitatively evaluate instruction-tuned models such as Alpaca and Flan-T5 on held-out tasks.
This project helps AI researchers and practitioners evaluate how well instruction-tuned large language models (LLMs) such as Alpaca and Flan-T5 perform on tasks they have not seen before. You provide a model, and it outputs quantitative scores across standard academic benchmarks, making it well suited to comparing LLMs and understanding their generalization capabilities.
552 stars. No commits in the last 6 months.
Use this if you need to quantitatively benchmark and compare the performance of various instruction-tuned large language models on unseen tasks using established academic benchmarks.
Not ideal if you are looking to fine-tune a model or analyze specific qualitative aspects of model outputs beyond standard benchmark metrics.
Stars: 552
Forks: 44
Language: Python
License: Apache-2.0
Category: transformers
Last pushed: Mar 10, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/declare-lab/instruct-eval"
Open to everyone: 100 requests/day, no key needed. Get a free key for 1,000 requests/day.
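For scripted access, here is a minimal Python sketch of the same request. Only the URL comes from this page; the key header name and the response field names are assumptions, so check the API docs before relying on them.

import requests

# URL copied from the curl example above; everything else in this sketch is illustrative.
URL = "https://pt-edge.onrender.com/api/v1/quality/transformers/declare-lab/instruct-eval"

def fetch_quality(url=URL, api_key=None):
    """Fetch the quality record for one repo; pass a key to raise the daily limit."""
    # The header name is an assumption -- consult the API docs for the real one.
    headers = {"X-API-Key": api_key} if api_key else {}
    resp = requests.get(url, headers=headers, timeout=10)
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    data = fetch_quality()
    # Field names are not documented on this page; inspect the payload to see the schema.
    print(data)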
Higher-rated alternatives
DaoD/INTERS
This is the repository for our paper "INTERS: Unlocking the Power of Large Language Models in...
Haiyang-W/TokenFormer
[ICLR2025 Spotlight🔥] Official Implementation of TokenFormer: Rethinking Transformer Scaling...
hkust-nlp/deita
Deita: Data-Efficient Instruction Tuning for Alignment [ICLR2024]
kehanlu/DeSTA2
Code and model for ICASSP 2025 Paper "Developing Instruction-Following Speech Language Model...
TIGER-AI-Lab/VisualWebInstruct
The official repo for "VisualWebInstruct: Scaling up Multimodal Instruction Data through Web...