sunyilgdx/Prompts4Keras

Prompt-learning methods using BERT4Keras (PET, EFL and NSP-BERT), for both Chinese and English.

Quality score: 23 / 100 (Experimental)

This project helps machine learning engineers and researchers quickly set up and compare prompt-learning methods for text classification, especially when very little labeled data is available (few-shot learning). It takes pre-trained BERT or RoBERTa models and your text datasets (English or Chinese) and applies prompt-learning techniques such as PET, EFL, and NSP-BERT to classify the text, making it useful for anyone who needs to efficiently evaluate different prompt-learning approaches on NLP tasks.
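As a rough illustration of the NSP-style prompt classification these methods build on, the sketch below scores candidate label prompts with BERT's next-sentence-prediction head via bert4keras. This is not the repository's actual API: the checkpoint paths, the label prompts, and the assumption that NSP index 0 means "is next sentence" are illustrative placeholders.

# Minimal sketch of NSP-style prompt classification with bert4keras.
# Not the repository's code: paths, prompts, and the NSP label convention
# (index 0 assumed to mean "is next sentence") are illustrative assumptions.
import numpy as np
from bert4keras.models import build_transformer_model
from bert4keras.tokenizers import Tokenizer

config_path = "bert/bert_config.json"    # placeholder checkpoint paths
checkpoint_path = "bert/bert_model.ckpt"
dict_path = "bert/vocab.txt"

tokenizer = Tokenizer(dict_path, do_lower_case=True)
model = build_transformer_model(config_path, checkpoint_path, with_nsp=True)

def nsp_score(text, prompt):
    """Probability that `prompt` is the next sentence after `text`."""
    token_ids, segment_ids = tokenizer.encode(text, prompt)
    probs = model.predict([np.array([token_ids]), np.array([segment_ids])])[0]
    return probs[0]  # assumed: index 0 = "is next sentence"

# Zero-shot classification: pick the label whose prompt the NSP head prefers.
text = "The movie was a complete waste of time."
prompts = {"positive": "It was a great movie.",
           "negative": "It was a terrible movie."}
print(max(prompts, key=lambda label: nsp_score(text, prompts[label])))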

No commits in the last 6 months.

Use this if you are an NLP researcher or machine learning engineer focused on text classification with limited data, and you want to experiment with or benchmark prompt-learning methods on both English and Chinese text.

Not ideal if you want a simple, out-of-the-box solution for large-scale text classification and have no interest in comparative prompt-learning research, or if you prefer frameworks other than TensorFlow/Keras.

natural-language-processing text-classification few-shot-learning machine-learning-research chinese-nlp
Status: Stale (6 months) · No package published · No dependents
Maintenance: 0 / 25
Adoption: 7 / 25
Maturity: 16 / 25
Community: 0 / 25


Stars: 30
Forks:
Language: Python
License: Apache-2.0
Last pushed: Oct 12, 2022
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/nlp/sunyilgdx/Prompts4Keras"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
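The same endpoint can also be queried from a script. A minimal sketch using only the Python standard library, assuming the endpoint returns JSON:

# Fetch the same quality data from Python instead of curl.
# Endpoint copied from the curl example above; JSON response is an assumption.
import json
import urllib.request

url = "https://pt-edge.onrender.com/api/v1/quality/nlp/sunyilgdx/Prompts4Keras"
with urllib.request.urlopen(url) as resp:
    data = json.load(resp)
print(json.dumps(data, indent=2))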