ShiZhengyan/PowerfulPromptFT
[NeurIPS 2023 Main Track] This is the repository for the paper titled "Don’t Stop Pretraining? Make Prompt-based Fine-tuning Powerful Learner"
This project helps machine learning practitioners improve the performance of language models on specific text classification tasks. It takes pre-trained language models and a small amount of task-specific text data, then applies a specialized pre-training technique called Prompt-based Continued Pre-training (PCP). The output is a more accurate fine-tuned language model, ready for tasks like sentiment analysis, question answering, or detecting text similarity. Data scientists, NLP engineers, and researchers who build and deploy text-based AI applications would use this.
No commits in the last 6 months.
Use this if you need to fine-tune a large language model for a specific text classification task and want to achieve higher accuracy, especially with limited labeled data.
Not ideal if you are not working with pre-trained language models or if your primary goal is not improving prompt-based fine-tuning for text classification.
Stars: 76
Forks: 19
Language: Python
License: MIT
Category: Prompt Engineering
Last pushed: Feb 04, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/ShiZhengyan/PowerfulPromptFT"
Open to everyone: 100 requests/day with no key required. Get a free key for 1,000 requests/day.
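The same lookup can be done from Python using only the standard library. This is a minimal sketch: the URL path (`/api/v1/quality/<category>/<owner>/<repo>`) is taken from the curl example above, but the shape of the JSON response is not documented here, so the returned dictionary's keys should be treated as unknown.

```python
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"


def build_url(category: str, owner: str, repo: str) -> str:
    """Build the API URL for a repository, following the path
    structure shown in the curl example."""
    return f"{API_BASE}/{category}/{owner}/{repo}"


def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """Fetch and decode the JSON payload for a repository.
    The response schema is not documented on this page."""
    with urllib.request.urlopen(build_url(category, owner, repo)) as resp:
        return json.load(resp)


# Build the URL for the repository on this page:
url = build_url("prompt-engineering", "ShiZhengyan", "PowerfulPromptFT")
print(url)
```

Without a key this is limited to 100 requests/day, so cache responses rather than calling the endpoint on every lookup.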
Related tools
OpenDriveLab/DriveLM
[ECCV 2024 Oral] DriveLM: Driving with Graph Visual Question Answering
MILVLG/prophet
Implementation of CVPR 2023 paper "Prompting Large Language Models with Answer Heuristics for...
deepankar27/Prompt_Organizer
Managed Prompt Engineering
mala-lab/NegPrompt
The official implementation of CVPR 24' Paper "Learning Transferable Negative Prompts for...
iamrk04/LLM-Solutions-Playbook
Unlock the potential of AI-driven solutions and delve into the world of Large Language Models....