jiangjiechen/uncommongen

Resources for our ACL 2023 paper: "Say What You Mean! Large Language Models Speak Too Positively about Negative Commonsense Knowledge".

Score: 21 / 100 (Experimental)

This project offers experimental code and resources for researchers examining how Large Language Models (LLMs) handle negative commonsense knowledge. It takes a dataset of commonsense statements, including both positive and negative assertions, and uses LLMs either to generate text from them or to answer true/false questions about them. Researchers in AI ethics, natural language processing, or cognitive science can use it to analyze LLM behavior on knowledge about what is not true.
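As an illustrative sketch (not the repository's actual code), turning commonsense assertions, positive and negative, into true/false probes for an LLM might look like this; the example statements and the prompt template are hypothetical:

```python
# Hypothetical commonsense assertions with gold labels; note that a
# *negative* assertion ("do not") can still be a *true* statement.
statements = [
    ("Lions live in groups called prides.", True),
    ("Lions do not live in the ocean.", True),   # negative but true
    ("Penguins can fly long distances.", False),
]

def to_prompt(statement: str) -> str:
    """Wrap a statement in a simple true/false question template."""
    return f"Is the following statement true or false?\n{statement}\nAnswer:"

# Prompts ready to send to an LLM; the model's answers would then be
# compared against the gold labels above.
prompts = [to_prompt(s) for s, _ in statements]
```

The paper's finding is that models tend to answer such probes too "positively", which is why both polarities appear in the dataset.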

No commits in the last 6 months.

Use this if you are a researcher studying the reliability and biases of Large Language Models, particularly regarding their understanding of what is NOT true.

Not ideal if you are looking for a general-purpose tool to improve LLM outputs or integrate LLMs into an application, as this is a research-specific evaluation framework.

AI-ethics NLP-research cognitive-AI language-model-evaluation commonsense-reasoning
Stale (6m) · No Package · No Dependents

Maintenance: 0 / 25
Adoption: 5 / 25
Maturity: 16 / 25
Community: 0 / 25


Stars: 9
Forks:
Language: Python
License: Apache-2.0
Last pushed: Jul 11, 2023
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/jiangjiechen/uncommongen"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
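The same endpoint can be queried from Python. A minimal sketch, assuming only the URL shown in the curl command above (the shape of the returned JSON is not documented here, so the response is simply decoded as-is):

```python
import json
import urllib.request

# Base endpoint taken from the curl example above.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality/llm-tools"

def quality_url(owner: str, repo: str) -> str:
    """Build the per-repository quality endpoint URL."""
    return f"{API_BASE}/{owner}/{repo}"

def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch and decode the quality report (rate-limited without an API key)."""
    with urllib.request.urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)

if __name__ == "__main__":
    # Network call; prints whatever JSON the API returns for this repo.
    print(fetch_quality("jiangjiechen", "uncommongen"))
```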