xiaoya-li/Instruction-Tuning-Survey
Companion repository for the paper `Instruction Tuning for Large Language Models: A Survey`
This project helps AI researchers and practitioners stay current with advancements in training Large Language Models (LLMs) to follow human instructions more effectively. It provides a structured collection of research papers and associated projects covering instruction tuning techniques, datasets, and evaluation methods. If you're working on improving how well LLMs follow specific commands or adapt to new tasks, this resource is for you.
230 stars. No commits in the last 6 months.
Use this if you are an AI researcher, machine learning engineer, or data scientist focusing on instruction tuning for Large Language Models and need a comprehensive, up-to-date overview of the field's methodologies and resources.
Not ideal if you are looking for an off-the-shelf software tool for immediate application, as this project serves as a research survey and resource compilation rather than a runnable program.
Stars
230
Forks
29
Language
—
License
Apache-2.0
Category
—
Last pushed
Aug 10, 2025
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/xiaoya-li/Instruction-Tuning-Survey"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
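The endpoint above follows an owner/repo URL pattern. As a minimal sketch, the snippet below builds that URL for any repository pair; the response schema is not documented on this page, so the example stops at constructing the request URL rather than assuming any JSON fields:

```python
from urllib.parse import quote

# Base path taken from the curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/quality/llm-tools"

def quality_url(owner: str, repo: str) -> str:
    """Build the quality-API URL for a GitHub owner/repo pair."""
    # quote() percent-encodes any characters unsafe in a URL path segment.
    return f"{BASE}/{quote(owner)}/{quote(repo)}"

url = quality_url("xiaoya-li", "Instruction-Tuning-Survey")
print(url)
```

The resulting URL can then be fetched with `curl` (as shown above) or any HTTP client; unauthenticated requests are limited to 100/day.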
Higher-rated alternatives
MantisAI/sieves
Plug-and-play document AI with zero-shot models.
rafaelpierre/bullet
bullet: A Zero-Shot / Few-Shot Learning, LLM Based, text classification framework
TencentARC-QQ/TagGPT
TagGPT: Large Language Models are Zero-shot Multimodal Taggers
amazon-science/adaptive-in-context-learning
AdaICL: Which Examples to Annotate for In-Context Learning? Towards Effective and Efficient Selection
andrewzamai/SLIMER_IT
An Instruction-tuned LLM for zero-shot NER on Italian