VE-FORBRYDERNE/mtj-softtuner
Create soft prompts for fairseq 13B dense, GPT-J-6B and GPT-Neo-2.7B for free in a Google Colab TPU instance
This tool lets AI practitioners customize the behavior of large language models like GPT-J-6B or GPT-Neo-2.7B without retraining the entire model. You provide a dataset for your specific task, and it trains a "soft prompt": a small matrix of learned embedding vectors that is prepended to the model's input and steers its output, while the model's own weights stay frozen. This is ideal for developers or researchers who want to adapt an existing large model to a specialized text generation task.
No commits in the last 6 months.
Use this if you need to quickly adapt a large, pre-trained language model to a specific text generation style or domain using a small amount of your own data, without incurring the high cost of full model fine-tuning.
Not ideal if you are looking for a no-code solution for direct use by non-technical content creators, or if you need to drastically alter the model's fundamental understanding rather than just its output style.
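mtj-softtuner itself drives Mesh Transformer JAX on a Colab TPU, but the underlying technique is framework-agnostic. The sketch below illustrates the idea in PyTorch using hypothetical names (this is not mtj-softtuner's API): a small trainable embedding matrix is prepended to the frozen model's input embeddings, and only that matrix is optimized against your dataset.

    # Minimal sketch of soft prompt tuning; illustrative only, not this repo's API.
    # The base model's weights stay frozen; only the soft prompt trains.
    import torch
    import torch.nn as nn

    class SoftPrompt(nn.Module):
        def __init__(self, n_tokens: int, embed_dim: int):
            super().__init__()
            # The soft prompt is just a small trainable matrix of embeddings.
            self.prompt = nn.Parameter(torch.randn(n_tokens, embed_dim) * 0.02)

        def forward(self, input_embeds: torch.Tensor) -> torch.Tensor:
            # Prepend the learned vectors to every sequence in the batch.
            batch = input_embeds.size(0)
            prompt = self.prompt.unsqueeze(0).expand(batch, -1, -1)
            return torch.cat([prompt, input_embeds], dim=1)

    # Usage: only soft_prompt.parameters() go to the optimizer.
    soft_prompt = SoftPrompt(n_tokens=20, embed_dim=4096)  # 4096 = GPT-J-6B hidden size
    embeds = torch.randn(2, 8, 4096)   # stand-in for a batch of token embeddings
    extended = soft_prompt(embeds)     # shape: (2, 28, 4096)
    optimizer = torch.optim.Adam(soft_prompt.parameters(), lr=1e-4)

Because only the prompt matrix (here 20 x 4096 values) is trained, the result is cheap to store and swap, which is why a single Colab TPU session is enough.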
Stars: 28
Forks: 20
Language: Python
License: Apache-2.0
Category: prompt-engineering
Last pushed: Mar 01, 2023
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/VE-FORBRYDERNE/mtj-softtuner"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
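The same endpoint can be called programmatically. Here is a minimal Python sketch assuming the endpoint returns JSON; the exact response schema is not documented in this listing.

    # Minimal sketch: fetch this repo's quality data from the public endpoint.
    # Assumes a JSON response; the schema is not documented here.
    import requests

    url = (
        "https://pt-edge.onrender.com/api/v1/quality/"
        "prompt-engineering/VE-FORBRYDERNE/mtj-softtuner"
    )
    response = requests.get(url, timeout=10)
    response.raise_for_status()  # surface HTTP errors such as rate limiting
    print(response.json())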
Higher-rated alternatives
THUDM/P-tuning-v2
An optimized deep prompt tuning strategy comparable to fine-tuning across scales and tasks
ucinlp/autoprompt
AutoPrompt: Automatic Prompt Construction for Masked Language Models.
zjunlp/KnowPrompt
[WWW 2022] KnowPrompt: Knowledge-aware Prompt-tuning with Synergistic Optimization for Relation...
zjunlp/PromptKG
PromptKG Family: a Gallery of Prompt Learning & KG-related research works, toolkits, and paper-list.
princeton-nlp/OptiPrompt
[NAACL 2021] Factual Probing Is [MASK]: Learning vs. Learning to Recall https://arxiv.org/abs/2104.05240