ShiZhengyan/PowerfulPromptFT

[NeurIPS 2023 Main Track] This is the repository for the paper titled "Don’t Stop Pretraining? Make Prompt-based Fine-tuning Powerful Learner"

Score: 44 / 100 (Emerging)

This project helps machine learning practitioners improve the performance of language models on specific text classification tasks. It takes a pre-trained language model and a small amount of task-specific text data, then applies a specialized pre-training technique called Prompt-based Continued Pre-training (PCP). The output is a more accurate fine-tuned language model, ready for tasks like sentiment analysis, question answering, or text-similarity detection. It is aimed at data scientists, NLP engineers, and researchers who build and deploy text-based AI applications.
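To make the prompt-based setup concrete, here is a minimal sketch of how inputs are typically wrapped in a cloze-style template for prompt-based fine-tuning and continued pre-training. The template, mask token, and verbalizer below are illustrative assumptions, not the repository's actual API:

```python
# Illustrative sketch of prompt-based input formatting (names are
# hypothetical, not taken from PowerfulPromptFT). In prompt-based
# fine-tuning, each example is wrapped in a cloze-style template and the
# model predicts a label word at the mask position.

MASK = "[MASK]"

# Hypothetical verbalizer: maps class labels to single label words.
VERBALIZER = {"positive": "great", "negative": "terrible"}

def to_prompt(text: str) -> str:
    """Wrap a raw input in a sentiment cloze template."""
    return f"{text} It was {MASK}."

def to_pcp_target(text: str, label: str) -> str:
    """For continued pre-training, fill the mask with the label word so
    the model is pre-trained on prompt-formatted text."""
    return to_prompt(text).replace(MASK, VERBALIZER[label])

print(to_prompt("The movie was slow but moving."))
# The movie was slow but moving. It was [MASK].
print(to_pcp_target("The movie was slow but moving.", "positive"))
# The movie was slow but moving. It was great.
```

The same template is reused at fine-tuning time, which is the intuition behind PCP: the model has already seen task text in the exact prompt format it will be fine-tuned on.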

No commits in the last 6 months.

Use this if you need to fine-tune a large language model for a specific text classification task and want to achieve higher accuracy, especially with limited labeled data.

Not ideal if you are not working with pre-trained language models, or if your goal is something other than improving prompt-based fine-tuning for text classification.

natural-language-processing text-classification language-model-fine-tuning AI-model-training data-science
Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 9 / 25
Maturity 16 / 25
Community 19 / 25


Stars: 76
Forks: 19
Language: Python
License: MIT
Last pushed: Feb 04, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/ShiZhengyan/PowerfulPromptFT"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
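The same endpoint can be called from Python. This is a minimal sketch using only the standard library; the URL pattern mirrors the curl example above, and the response is assumed (not confirmed) to be JSON:

```python
# Sketch of calling the quality API from Python (standard library only).
# The URL pattern follows the curl example; the JSON response shape is an
# assumption.
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the per-repository quality endpoint URL."""
    return f"{BASE}/{category}/{owner}/{repo}"

def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """Fetch and decode the quality report. No API key is needed on the
    free tier, per the note above."""
    with urllib.request.urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)

print(quality_url("prompt-engineering", "ShiZhengyan", "PowerfulPromptFT"))
# https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/ShiZhengyan/PowerfulPromptFT
```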