RunxinXu/ChildTuning
Source code for our EMNLP'21 paper "Raise a Child in Large Language Model: Towards Effective and Generalizable Fine-tuning"
This project helps machine learning engineers and researchers fine-tune large pre-trained language models more effectively. Given a pre-trained model and a dataset for a downstream task, it produces a fine-tuned model that performs better and generalizes more reliably to unseen data by updating only a subset of the model's parameters (the "child network") during fine-tuning. It is aimed at people building natural language processing applications.
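The repository ships its own optimizer implementation; the snippet below is only a minimal sketch of the core idea behind the task-free variant described in the paper (Child-Tuning-F), assuming a standard PyTorch training loop. The function name child_tuning_f_step and the mask probability p are illustrative, not the repo's actual API.

```python
# Sketch of the Child-Tuning-F idea: before each update, sample a Bernoulli(p)
# mask over every gradient, zero out the gradients outside the sampled "child
# network", and rescale the rest by 1/p so the update is unbiased in expectation.
import torch

def child_tuning_f_step(model, optimizer, p=0.3):
    """Apply a random gradient mask, then take one optimizer step."""
    for param in model.parameters():
        if param.grad is None:
            continue
        mask = torch.bernoulli(torch.full_like(param.grad, p))
        param.grad.mul_(mask).div_(p)  # keep only the child network's gradients
    optimizer.step()
    optimizer.zero_grad()
```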
No commits in the last 6 months.
Use this if you are a machine learning engineer or researcher looking to improve the performance and generalizability of your fine-tuned large language models for various NLP tasks.
Not ideal if you are a practitioner without a deep understanding of machine learning model training and fine-tuning, as this is a technical tool for developers.
Stars: 62
Forks: 8
Language: Python
License: —
Category: nlp
Last pushed: Nov 06, 2021
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/nlp/RunxinXu/ChildTuning"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
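For scripted access, the same endpoint can be called from Python. This is a minimal sketch; the response schema is not documented here, so the example simply prints the returned JSON rather than assuming specific field names.

```python
# Fetch the quality data for this repository and print the raw JSON response.
import requests

url = "https://pt-edge.onrender.com/api/v1/quality/nlp/RunxinXu/ChildTuning"
resp = requests.get(url, timeout=10)
resp.raise_for_status()
print(resp.json())
```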
Higher-rated alternatives
uds-lsv/bert-stable-fine-tuning
On the Stability of Fine-tuning BERT: Misconceptions, Explanations, and Strong Baselines
MeryylleA/lunariscodex
A high-performance PyTorch toolkit for pre-training modern, Llama-style language models. Based...
VanekPetr/flan-t5-text-classifier
Fine-tuning of the Flan-T5 LLM for text classification 🤖 focuses on adapting a state-of-the-art...
kingTLE/literary-alpaca2
From building the vocabulary to fine-tuning: this is all you need
YuweiYin/HLT-MT
[IJCAI-ECAI 2022] HLT-MT: High-resource Language-specific Training for Multilingual Neural...