ZifanL/TSDS
Implementation of TSDS: Data Selection for Task-Specific Model Finetuning. An optimal-transport framework for selecting domain-specific and task-specific training data to improve LLM finetuning and instruction tuning.
TSDS helps machine learning engineers and researchers improve the performance of large language models (LLMs) on specific tasks. It takes a pool of candidate training examples plus a small set of examples representing your target task, then identifies the candidates most relevant to that task. The result is a smaller, more focused training set that finetunes your model more efficiently and yields better accuracy for your use case.
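The selection idea can be sketched in a few lines. Note this is only a simplified nearest-neighbor stand-in, not the repository's actual optimal-transport objective; the function name and arguments are hypothetical.

```python
import numpy as np

def select_training_data(candidate_emb: np.ndarray,
                         query_emb: np.ndarray,
                         k: int) -> np.ndarray:
    """Pick the k candidates closest to the target-task examples.

    Simplified stand-in for TSDS's optimal-transport selection:
    score each candidate by its distance to the nearest target
    (query) example, then keep the k best-scoring candidates.
    """
    # Pairwise Euclidean distances, shape (n_candidates, n_queries)
    diffs = candidate_emb[:, None, :] - query_emb[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    # A candidate's task affinity = distance to its nearest query
    scores = dists.min(axis=1)
    # Indices of the k lowest-distance (most relevant) candidates
    return np.argsort(scores)[:k]
```

In practice the embeddings would come from the same model family you plan to finetune, so that "close in embedding space" approximates "useful for the target task".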
No commits in the last 6 months.
Use this if you need to fine-tune a large language model for a particular application and want to select the most impactful training data from a larger pool.
Not ideal if you are looking for a tool to generate new training data or if you don't work with large language models.
Stars: 17
Forks: 1
Language: Python
License: MIT
Category:
Last pushed: Dec 25, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/ZifanL/TSDS"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000/day.
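The same endpoint can be called from Python with only the standard library. A minimal sketch; the response schema is not documented above, so the parsed JSON is returned as-is, and the function names are hypothetical:

```python
import json
from urllib.parse import quote
from urllib.request import urlopen

API_BASE = "https://pt-edge.onrender.com/api/v1/quality/transformers"

def quality_url(owner: str, repo: str) -> str:
    """Build the endpoint URL for a repository's quality record."""
    return f"{API_BASE}/{quote(owner)}/{quote(repo)}"

def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch and parse the repository's quality record as JSON."""
    with urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)
```

For example, `fetch_quality("ZifanL", "TSDS")` requests the same URL shown in the curl command above.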
Higher-rated alternatives
DaoD/INTERS
This is the repository for our paper "INTERS: Unlocking the Power of Large Language Models in...
declare-lab/instruct-eval
This repository contains code to quantitatively evaluate instruction-tuned models such as Alpaca...
Haiyang-W/TokenFormer
[ICLR2025 Spotlight] Official Implementation of TokenFormer: Rethinking Transformer Scaling...
hkust-nlp/deita
Deita: Data-Efficient Instruction Tuning for Alignment [ICLR2024]
kehanlu/DeSTA2
Code and model for ICASSP 2025 Paper "Developing Instruction-Following Speech Language Model...