SinclairCoder/Instruction-Tuning-Papers
A reading list on instruction tuning. The trend started with Natural-Instructions (ACL 2022), FLAN (ICLR 2022), and T0 (ICLR 2022).
Staying current with breakthroughs in training large language models to understand and follow instructions can be a challenge. This resource provides a curated list of research papers on 'instruction-tuning,' a technique that improves a model's ability to perform multiple tasks and generalize to new ones by teaching it to interpret natural language prompts, examples, and constraints. It's designed for AI researchers and practitioners who want to explore the latest advancements in making language models more versatile and human-like in their comprehension.
766 stars. No commits in the last 6 months.
Use this if you are an AI researcher or machine learning engineer looking for a comprehensive reading list on instruction-tuning to enhance language model performance.
Not ideal if you are looking for an implementation guide or code examples for instruction-tuning.
Stars
766
Forks
24
Language
—
License
—
Category
Last pushed
Jul 20, 2023
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/SinclairCoder/Instruction-Tuning-Papers"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
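As a hedged sketch of using the API above from Python: the endpoint URL is the one shown in the curl command, but the shape of the JSON response is an assumption and is left unparsed here.

```python
# Minimal sketch of calling the repo-quality API shown above.
# The endpoint URL comes from the curl example on this page; the
# JSON response schema is an assumption, so no specific fields are read.
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality/transformers"

def quality_url(owner: str, repo: str) -> str:
    """Build the API URL for a given GitHub owner/repo pair."""
    return f"{BASE}/{owner}/{repo}"

def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch the quality record; no API key is needed up to 100 requests/day."""
    with urllib.request.urlopen(quality_url(owner, repo), timeout=10) as resp:
        return json.loads(resp.read().decode("utf-8"))

if __name__ == "__main__":
    print(quality_url("SinclairCoder", "Instruction-Tuning-Papers"))
```

The `quality_url` helper is illustrative; with a free key, a higher 1,000 requests/day limit applies, though the key-passing mechanism is not documented on this page.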
Higher-rated alternatives
DaoD/INTERS
This is the repository for our paper "INTERS: Unlocking the Power of Large Language Models in...
declare-lab/instruct-eval
This repository contains code to quantitatively evaluate instruction-tuned models such as Alpaca...
Haiyang-W/TokenFormer
[ICLR2025 Spotlight] Official Implementation of TokenFormer: Rethinking Transformer Scaling...
hkust-nlp/deita
Deita: Data-Efficient Instruction Tuning for Alignment [ICLR2024]
kehanlu/DeSTA2
Code and model for ICASSP 2025 Paper "Developing Instruction-Following Speech Language Model...