joeljang/negated-prompts-for-llms

[NeurIPS 2022 Workshop] A Case Study with Negated Prompts using T0 (3B, 11B), InstructGPT (350M-175B), GPT-3 (350M-175B) & OPT (125M-175B) LMs

Score: 18 / 100 (Experimental)

This project is for researchers and practitioners who need to understand how well large language models (LLMs) interpret negated statements or instructions. It provides tools to feed models prompts containing negated concepts and to analyze their responses, revealing whether a model correctly understands what *not* to do or say. Anyone evaluating LLM robustness or behavior for a specific application would find it useful.

No commits in the last 6 months.

Use this if you are a researcher or AI practitioner investigating the limitations and capabilities of large language models, particularly their ability to understand and respond correctly to negated instructions or information.

Not ideal if you are looking for a tool to simply generate or refine prompts for LLMs without deeply analyzing their nuanced understanding of negation.

LLM-evaluation NLP-research prompt-engineering AI-robustness model-comprehension
No License · Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 6 / 25
Maturity 8 / 25
Community 4 / 25

How are scores calculated?

Stars: 24
Forks: 1
Language: Python
License: none
Last pushed: Sep 27, 2022
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/joeljang/negated-prompts-for-llms"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
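If you prefer to consume this endpoint from Python rather than curl, a minimal sketch using only the standard library is shown below. The URL path structure (`category/owner/repo`) is taken from the curl example above; the shape of the JSON response is an assumption, since the schema is not documented here.

```python
import json
import urllib.request

# Base URL taken from the curl example above.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the quality-data URL for a repository (path layout from the curl example)."""
    return f"{API_BASE}/{category}/{owner}/{repo}"

def fetch_quality(category: str, owner: str, repo: str, timeout: float = 10.0) -> dict:
    """Fetch and decode the JSON payload. Requires network access;
    response fields are undocumented, so treat the result as an opaque dict."""
    with urllib.request.urlopen(quality_url(category, owner, repo), timeout=timeout) as resp:
        return json.load(resp)

# Reconstructs the exact URL used in the curl example above.
print(quality_url("prompt-engineering", "joeljang", "negated-prompts-for-llms"))
```

Note the free tier allows 100 requests/day without a key, so a polite client should cache responses rather than re-fetch on every call.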