princeton-pli/STAT
Skill-Targeted Adaptive Training
This project improves the performance of large language models (LLMs) by diagnosing the specific skills a model lacks and then generating or reweighting training data to target those weaknesses. It takes an existing LLM and a 'skill catalog' as input, and outputs refined training datasets and fine-tuning instructions aimed at closing the identified skill gaps. It is intended for researchers and practitioners who train or fine-tune LLMs.
Use this if you are developing or deploying LLMs and want to boost their performance on specific tasks by systematically strengthening weak skills.
Not ideal if you are looking for a pre-trained, ready-to-use LLM, or if you lack the infrastructure and expertise to manage model training and data preparation.
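To make the workflow concrete, here is a minimal Python sketch of skill-targeted data reweighting. It illustrates the general idea only, not the repository's actual API: the skill tags, the `skill_accuracy` and `reweight` helpers, and the weighting formula are all hypothetical.

```python
# Hypothetical sketch of skill-targeted data reweighting (not the repo's
# actual API): score the model per skill from eval results, then upweight
# training examples that exercise skills where the model is weak.
from collections import defaultdict

def skill_accuracy(eval_results):
    """Aggregate per-skill accuracy from (skill, correct) eval records."""
    totals, hits = defaultdict(int), defaultdict(int)
    for skill, correct in eval_results:
        totals[skill] += 1
        hits[skill] += int(correct)
    return {s: hits[s] / totals[s] for s in totals}

def reweight(dataset, accuracy, floor=0.05):
    """Weight each example by the weakness of its least-mastered skill.

    `dataset` is a list of dicts with a 'skills' field listing skill tags;
    weight = 1 - min(accuracy over the example's skills), floored so that
    examples covering well-mastered skills are not dropped entirely.
    """
    weighted = []
    for ex in dataset:
        weakest = min(accuracy.get(s, 0.0) for s in ex["skills"])
        weighted.append({**ex, "weight": max(1.0 - weakest, floor)})
    return weighted

# Toy example: the model is weak on 'unit_conversion', strong on 'arithmetic'.
results = [("arithmetic", True), ("arithmetic", True),
           ("unit_conversion", False), ("unit_conversion", True)]
data = [{"text": "2 + 2 = ?", "skills": ["arithmetic"]},
        {"text": "3 km in miles?", "skills": ["unit_conversion", "arithmetic"]}]
for ex in reweight(data, skill_accuracy(results)):
    print(ex["text"], ex["weight"])
```

The reweighted dataset can then drive a weighted sampler during fine-tuning, so that gradient updates concentrate on the diagnosed weaknesses.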
Stars: 16
Forks: 2
Language: Python
License: —
Category: —
Last pushed: Mar 12, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/princeton-pli/STAT"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
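For programmatic access, a minimal Python sketch using only the standard library is shown below. The JSON shape of the response and the header name for supplying an API key are assumptions, not documented behavior.

```python
# Minimal sketch of calling the endpoint from Python; the response schema
# and the 'X-API-Key' header name are assumptions, not documented here.
import json
import urllib.request

URL = "https://pt-edge.onrender.com/api/v1/quality/llm-tools/princeton-pli/STAT"

req = urllib.request.Request(URL)
# req.add_header("X-API-Key", "YOUR_KEY")  # hypothetical header for the keyed tier

with urllib.request.urlopen(req) as resp:
    data = json.load(resp)

print(json.dumps(data, indent=2))  # inspect whatever fields the API returns
```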
Higher-rated alternatives
MantisAI/sieves
Plug-and-play document AI with zero-shot models.
xiaoya-li/Instruction-Tuning-Survey
Project for the paper entitled `Instruction Tuning for Large Language Models: A Survey`
TencentARC-QQ/TagGPT
TagGPT: Large Language Models are Zero-shot Multimodal Taggers
rafaelpierre/bullet
bullet: A Zero-Shot / Few-Shot Learning, LLM Based, text classification framework
amazon-science/adaptive-in-context-learning
AdaICL: Which Examples to Annotate for In-Context Learning? Towards Effective and Efficient Selection