princeton-nlp/OptiPrompt
[NAACL 2021] Factual Probing Is [MASK]: Learning vs. Learning to Recall https://arxiv.org/abs/2104.05240
This project evaluates how well a pretrained language model recalls factual relationships, e.g. "Paris is the capital of [MASK]", without fine-tuning the model itself. You provide factual statements with a masked slot, and the tool outputs the model's predictions along with a score for its factual recall. Researchers and practitioners working with LLMs can use it to gauge a model's inherent knowledge.
168 stars. No commits in the last 6 months.
Use this if you need to quickly assess the factual knowledge embedded within a pre-trained language model using various prompting methods.
Not ideal if you are looking to build or train a new large language model from scratch, or if your goal is general-purpose natural language processing tasks beyond factual recall.
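The fill-in-the-blank setup described above can be sketched in a few lines. Note this is only the manual-prompt baseline, not OptiPrompt itself (which learns continuous prompt vectors); the `bert-base-cased` model name and the `probe` helper are illustrative, assuming the Hugging Face `transformers` library.

```python
# Minimal sketch of factual probing with a masked language model.
# This shows only the fill-in-the-blank baseline; OptiPrompt instead
# learns continuous prompt vectors. Model name is illustrative.
from typing import Callable, List, Tuple

def probe(statement: str,
          fill_mask: Callable[[str], List[dict]],
          top_k: int = 5) -> List[Tuple[str, float]]:
    """Return the model's top (token, score) predictions for the [MASK] slot."""
    preds = fill_mask(statement)[:top_k]
    return [(p["token_str"].strip(), p["score"]) for p in preds]

if __name__ == "__main__":
    # Requires `pip install transformers torch`; downloads the model on first run.
    from transformers import pipeline
    fm = pipeline("fill-mask", model="bert-base-cased")
    for token, score in probe("Paris is the capital of [MASK].", fm):
        print(f"{token}\t{score:.3f}")
```

A high score for the correct filler (here, "France") is the signal such probes use as evidence of factual recall.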
Stars: 168
Forks: 22
Language: Python
License: MIT
Category:
Last pushed: Oct 07, 2022
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/princeton-nlp/OptiPrompt"
Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000/day.
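The same endpoint can be queried from Python instead of curl. A minimal sketch, assuming only the standard library; the response schema is not documented here, so the script just prints the raw JSON, and the `quality_url` helper is a name introduced for illustration.

```python
# Sketch of fetching the quality data for a repo from the API above.
# Response schema is undocumented here, so we only print raw JSON.
import json
from urllib.request import urlopen

BASE = "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering"

def quality_url(owner: str, repo: str) -> str:
    """Build the per-repo quality endpoint URL."""
    return f"{BASE}/{owner}/{repo}"

if __name__ == "__main__":
    with urlopen(quality_url("princeton-nlp", "OptiPrompt")) as resp:
        print(json.dumps(json.load(resp), indent=2))
```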
Higher-rated alternatives
THUDM/P-tuning-v2
An optimized deep prompt tuning strategy comparable to fine-tuning across scales and tasks
ucinlp/autoprompt
AutoPrompt: Automatic Prompt Construction for Masked Language Models.
zjunlp/KnowPrompt
[WWW 2022] KnowPrompt: Knowledge-aware Prompt-tuning with Synergistic Optimization for Relation...
zjunlp/PromptKG
PromptKG Family: a Gallery of Prompt Learning & KG-related research works, toolkits, and paper-list.
VE-FORBRYDERNE/mtj-softtuner
Create soft prompts for fairseq 13B dense, GPT-J-6B and GPT-Neo-2.7B for free in a Google Colab...