zwcolin/Domain-Robustness-Prompt-Tuning

Implementation of the report: On the Domain Robustness of Prefix and Prompt Tuning

Score: 26 / 100 (Experimental)

This project helps machine learning engineers and researchers evaluate how well language models perform when they are fine-tuned for specific tasks and then applied to new, slightly different data. It takes configurations for language model training and testing as input and outputs metrics that quantify the model's robustness across data domains. It is aimed at professionals working with natural language processing models.

No commits in the last 6 months.

Use this if you are a machine learning engineer or researcher who needs to assess the domain robustness of language models after prompt or prefix tuning.

Not ideal if you are looking for a pre-trained, ready-to-use language model for direct application without needing to evaluate its tuning robustness.

natural-language-processing machine-learning-engineering language-model-evaluation model-robustness prompt-tuning
No License · Stale (6 months) · No Package · No Dependents

Maintenance: 0 / 25
Adoption: 6 / 25
Maturity: 8 / 25
Community: 12 / 25

How are scores calculated?
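The overall figure appears to be the sum of the four category scores, each out of 25: 0 + 6 + 8 + 12 = 26 out of 100.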

Stars: 20
Forks: 3
Language: Python
License: None
Last pushed: Mar 10, 2022
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/zwcolin/Domain-Robustness-Prompt-Tuning"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
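If you prefer Python over curl, here is a minimal sketch of the same request. It assumes the `requests` package is installed and that the endpoint returns JSON; the response fields are not documented here, so the script simply pretty-prints whatever comes back.

```python
# Fetch the quality data for this repo from the public API and pretty-print it.
import json

import requests

URL = (
    "https://pt-edge.onrender.com/api/v1/quality/"
    "prompt-engineering/zwcolin/Domain-Robustness-Prompt-Tuning"
)

resp = requests.get(URL, timeout=30)
resp.raise_for_status()  # raise on HTTP errors (e.g. rate limit exceeded)
print(json.dumps(resp.json(), indent=2))
```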