hkust-nlp/deita
Deita: Data-Efficient Instruction Tuning for Alignment [ICLR2024]
This project helps machine learning engineers efficiently train Large Language Models (LLMs) to follow instructions. It takes raw, uncurated instruction datasets and outputs a smaller, higher-quality subset of that data. This process creates more effective instruction-tuned LLMs while significantly reducing the amount of data and computational resources needed for training.
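A minimal sketch of the score-and-select idea behind this kind of data curation. This is not deita's actual API: the paper trains LLM-based complexity and quality scorers and adds a diversity filter, whereas the heuristics below are hypothetical stand-ins used only to illustrate the ranking-and-truncation step.

# Illustrative sketch only; the scoring heuristics are toy stand-ins,
# not deita's learned scorers.

def complexity_score(example: dict) -> float:
    # Hypothetical proxy: treat longer instructions as more complex.
    return float(len(example["instruction"].split()))

def quality_score(example: dict) -> float:
    # Hypothetical proxy: treat longer responses as higher quality.
    return float(len(example["response"].split()))

def select_subset(dataset: list[dict], budget: int) -> list[dict]:
    # Rank every example by a combined score, keep only the top `budget`.
    ranked = sorted(
        dataset,
        key=lambda ex: complexity_score(ex) * quality_score(ex),
        reverse=True,
    )
    return ranked[:budget]

if __name__ == "__main__":
    raw = [
        {"instruction": "Say hi", "response": "Hi!"},
        {"instruction": "Explain how gradient descent minimizes a loss",
         "response": "Gradient descent iteratively updates parameters..."},
    ]
    # Keep only the single highest-scoring example from the raw pool.
    print(select_subset(raw, budget=1))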
591 stars. No commits in the last 6 months.
Use this if you are a machine learning engineer or researcher focused on developing and fine-tuning LLMs, and you need to improve model performance and efficiency by selecting the most impactful training data.
Not ideal if you want a pre-trained, ready-to-use LLM for general applications and have no need to get into fine-tuning or training-data curation.
Stars: 591
Forks: 35
Language: Python
License: Apache-2.0
Category:
Last pushed: Dec 09, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/hkust-nlp/deita"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
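For scripted access, a minimal Python sketch hitting the same endpoint as the curl command above. The use of the requests library and the assumption that the response body is JSON are illustrative, not documented behavior of the service.

import requests

# Public endpoint from the curl example above; no key needed
# for up to 100 requests/day.
URL = "https://pt-edge.onrender.com/api/v1/quality/transformers/hkust-nlp/deita"

resp = requests.get(URL, timeout=10)
resp.raise_for_status()  # surface HTTP errors instead of parsing a bad body
data = resp.json()       # assumption: the API returns JSON
print(data)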
Higher-rated alternatives
DaoD/INTERS
This is the repository for our paper "INTERS: Unlocking the Power of Large Language Models in...
declare-lab/instruct-eval
This repository contains code to quantitatively evaluate instruction-tuned models such as Alpaca...
Haiyang-W/TokenFormer
[ICLR2025 Spotlight] Official Implementation of TokenFormer: Rethinking Transformer Scaling...
kehanlu/DeSTA2
Code and model for ICASSP 2025 Paper "Developing Instruction-Following Speech Language Model...
TIGER-AI-Lab/VisualWebInstruct
The official repo for "VisualWebInstruct: Scaling up Multimodal Instruction Data through Web...