jiangjiechen/uncommongen
Resources for our ACL 2023 paper: "Say What You Mean! Large Language Models Speak Too Positively about Negative Commonsense Knowledge".
This project provides experimental code and resources for researchers examining how Large Language Models (LLMs) handle negative commonsense knowledge. It takes a dataset of commonsense statements, containing both positive and negative assertions, and uses LLMs either to generate text from them or to answer true/false questions about them (a minimal sketch of this kind of probing follows below). Researchers in AI ethics, natural language processing, or cognitive science can use it to analyze LLM behavior.
No commits in the last 6 months.
Use this if you are a researcher studying the reliability and biases of Large Language Models, particularly regarding their understanding of what is NOT true.
Not ideal if you are looking for a general-purpose tool to improve LLM outputs or integrate LLMs into an application, as this is a research-specific evaluation framework.
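As a rough illustration of the kind of probing described above, the sketch below builds true/false prompts from positive and negative commonsense statements and checks an LLM's answer. The example statements, the prompt wording, and the ask_llm stub are hypothetical placeholders introduced here for illustration; they are not the repository's actual dataset format, prompts, or API.

```python
# Minimal sketch of true/false probing on positive vs. negative commonsense
# statements. The statements, prompt wording, and ask_llm stub are
# hypothetical placeholders, not the repository's actual data or prompts.

def ask_llm(prompt: str) -> str:
    """Stub standing in for a real LLM API call; returns a canned answer."""
    return "true"  # replace with a call to your LLM client of choice

STATEMENTS = [
    # (statement, gold label: True if the statement is factually correct)
    ("Lions live on land.", True),               # positive assertion
    ("Lions do not live in the ocean.", True),   # negative assertion
]

def probe(statement: str) -> bool:
    """Ask the model whether a statement is true and parse its answer."""
    prompt = (
        "Is the following statement true or false?\n"
        f'"{statement}"\n'
        'Answer with "true" or "false".'
    )
    answer = ask_llm(prompt).strip().lower()
    return answer.startswith("true")

if __name__ == "__main__":
    for statement, gold in STATEMENTS:
        print(f"{statement!r}: predicted={probe(statement)}, gold={gold}")
```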
Stars: 9
Forks: —
Language: Python
License: Apache-2.0
Category: —
Last pushed: Jul 11, 2023
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/jiangjiechen/uncommongen"
Open to everyone: 100 requests/day with no API key; a free key raises the limit to 1,000 requests/day.
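If you prefer Python over curl, the snippet below fetches the same record with the requests library. It assumes the endpoint returns JSON (a reasonable but unverified assumption) and uses the keyless, rate-limited access described above; the response schema is not documented here, so inspect the payload yourself.

```python
# Fetch the quality data for this repository from the public endpoint.
# Assumes the endpoint returns JSON; inspect the payload for its schema.
import requests

URL = "https://pt-edge.onrender.com/api/v1/quality/llm-tools/jiangjiechen/uncommongen"

response = requests.get(URL, timeout=10)
response.raise_for_status()  # fail loudly on HTTP errors (e.g., rate limits)
data = response.json()
print(data)
```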
Higher-rated alternatives
filipnaudot/llmSHAP
llmSHAP: a multi-threaded explainability framework using Shapley values for LLM-based outputs.
microsoft/automated-brain-explanations
Generating and validating natural-language explanations for the brain.
CAS-SIAT-XinHai/CPsyCoun
[ACL 2024] CPsyCoun: A Report-based Multi-turn Dialogue Reconstruction and Evaluation Framework...
wesg52/universal-neurons
Universal Neurons in GPT2 Language Models
ICTMCG/LLM-for-misinformation-research
Paper list of misinformation research using (multi-modal) large language models, i.e., (M)LLMs.