THUDM/P-tuning-v2

An optimized deep prompt tuning strategy comparable to fine-tuning across scales and tasks

Score: 46/100 (Emerging)

P-tuning v2 helps machine learning engineers and researchers adapt existing large language models (LLMs) to specific natural language tasks without the heavy computational cost of full fine-tuning. It takes a pre-trained transformer model plus task-specific text data and produces a specialized model for tasks such as question answering or named entity recognition. It is aimed at professionals building customized NLP applications.
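The core idea behind this kind of "deep" prompt tuning is that trainable continuous prompt vectors are prepended to the attention keys and values at every transformer layer while the base model's weights stay frozen. A minimal sketch of one such attention step, using NumPy with illustrative names and dimensions (this is an assumption-laden toy, not P-tuning-v2's actual API):

```python
import numpy as np

# Toy sketch: at each layer, trainable prefix key/value vectors are
# prepended to the frozen model's keys and values before attention.
d_model, prefix_len, seq_len = 8, 4, 5
rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_with_prefix(q, k, v, prefix_k, prefix_v):
    # Prepend the trainable per-layer prefix to keys and values.
    k = np.concatenate([prefix_k, k], axis=0)   # (prefix_len + seq_len, d_model)
    v = np.concatenate([prefix_v, v], axis=0)
    scores = q @ k.T / np.sqrt(q.shape[-1])     # (seq_len, prefix_len + seq_len)
    return softmax(scores) @ v                  # (seq_len, d_model)

# Frozen "model" activations and trainable prefix parameters for one layer:
q = rng.normal(size=(seq_len, d_model))
k = rng.normal(size=(seq_len, d_model))
v = rng.normal(size=(seq_len, d_model))
prefix_k = rng.normal(size=(prefix_len, d_model))  # trainable
prefix_v = rng.normal(size=(prefix_len, d_model))  # trainable

out = attention_with_prefix(q, k, v, prefix_k, prefix_v)
print(out.shape)  # (5, 8)
```

Only the prefix tensors would receive gradients during training, which is why the approach stays cheap relative to full fine-tuning.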

2,077 stars. No commits in the last 6 months.

Use this if you need to optimize the performance of small to medium-sized pre-trained language models on specific NLP tasks, such as sequence tagging or text classification, with limited computational resources.

Not ideal if you require full model fine-tuning for maximum performance on very large models or if you are not comfortable with machine learning development workflows.

Natural Language Processing · Machine Learning Engineering · Model Adaptation · Text Classification · Named Entity Recognition
Stale (6 months) · No Package · No Dependents
Maintenance 0 / 25
Adoption 10 / 25
Maturity 16 / 25
Community 20 / 25


Stars: 2,077
Forks: 207
Language: Python
License: Apache-2.0
Last pushed: Nov 16, 2023
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/THUDM/P-tuning-v2"

Open to everyone: 100 requests/day, no key needed. Get a free key for 1,000/day.
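The same endpoint can be queried from Python with the standard library. A minimal sketch, assuming the endpoint returns JSON (the response schema is not documented here, so field names are not shown):

```python
import json
import urllib.request

# Same endpoint as the curl command above.
url = ("https://pt-edge.onrender.com/api/v1/quality/"
       "prompt-engineering/THUDM/P-tuning-v2")

def fetch_quality(endpoint: str) -> dict:
    """Fetch and decode the quality-score JSON for a repo."""
    with urllib.request.urlopen(endpoint, timeout=10) as resp:
        return json.load(resp)

# data = fetch_quality(url)  # inspect the returned dict for score fields
```

Note the path encodes the repo as `owner/name`, so other repos can be queried by substituting that suffix.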