hkust-nlp/deita

Deita: Data-Efficient Instruction Tuning for Alignment [ICLR2024]

Score: 40 / 100 (Emerging)

This project helps machine learning engineers efficiently train Large Language Models (LLMs) to follow instructions. It takes raw, uncurated instruction datasets and outputs a smaller, higher-quality subset of that data. This process creates more effective instruction-tuned LLMs while significantly reducing the amount of data and computational resources needed for training.
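In rough terms, the selection step amounts to scoring every instruction-response pair with a quality/complexity model and keeping only the top-ranked fraction. The following is a minimal sketch of that idea, not deita's actual API: select_top_fraction, score_example, and the dummy scorer are all hypothetical names introduced for illustration.

import random
from typing import Callable

# Hypothetical sketch of quality-based data selection. `score_example` stands in
# for a learned quality/complexity scorer; it is NOT deita's real interface.
def select_top_fraction(
    dataset: list[dict],
    score_example: Callable[[dict], float],
    keep_ratio: float = 0.1,
) -> list[dict]:
    """Keep the highest-scoring fraction of instruction-response pairs."""
    ranked = sorted(dataset, key=score_example, reverse=True)
    return ranked[: max(1, int(len(ranked) * keep_ratio))]

# Toy usage: a dummy scorer that prefers longer responses (illustrative only).
pool = [{"instruction": f"q{i}", "response": "a" * random.randint(1, 100)} for i in range(1000)]
curated = select_top_fraction(pool, lambda ex: len(ex["response"]), keep_ratio=0.05)
print(len(curated))  # 50 examples, 5% of the pool

Training only on the curated subset is what yields the data and compute savings described above.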

591 stars. No commits in the last 6 months.

Use this if you are a machine learning engineer or researcher focused on developing and fine-tuning LLMs, and you need to improve model performance and efficiency by selecting the most impactful training data.

Not ideal if you are looking for a pre-trained, ready-to-use LLM for general applications and do not want to get involved in fine-tuning or data curation.

Tags: LLM, fine-tuning, data efficiency, natural language processing, model alignment, machine learning research
Badges: Stale (6 months), No Package, No Dependents
Score breakdown (each subscore out of 25; the four sum to the 40/100 overall):
Maintenance: 0 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 14 / 25


Stars: 591
Forks: 35
Language: Python
License: Apache-2.0
Last pushed: Dec 09, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/hkust-nlp/deita"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
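The same endpoint can be queried from Python. This is a minimal sketch using the URL from the curl example above; it assumes the endpoint returns JSON, and since the response schema is not documented here, it simply prints whatever comes back.

import requests

# Quality endpoint for hkust-nlp/deita, copied from the curl example above.
URL = "https://pt-edge.onrender.com/api/v1/quality/transformers/hkust-nlp/deita"

def fetch_quality_report(url: str = URL) -> dict:
    """Fetch the project quality report; assumes a JSON response body."""
    response = requests.get(url, timeout=10)
    response.raise_for_status()  # surfaces rate-limit (100 requests/day) or server errors
    return response.json()

if __name__ == "__main__":
    print(fetch_quality_report())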