pablo-chacon/Spoon-Bending
Educational analysis of LLM alignment, safety behavior, and framing-sensitive response patterns.
This project offers an educational analysis of how Large Language Models (LLMs) such as ChatGPT respond to different types of questions, especially on sensitive topics. It shows how rephrasing or contextualizing a query can change the answer the model gives. Researchers and educators in AI ethics or social science can use it to understand and teach about AI alignment and bias.
Use this if you are studying or teaching about AI safety, bias, and how conversational AI guardrails can be influenced by how questions are framed.
Not ideal if you are looking for operational guidance on misusing AI or bypassing safety features for unethical purposes, as this is for educational research only.
Stars: 22
Forks: 1
Language: —
License: —
Category:
Last pushed: Nov 04, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/pablo-chacon/Spoon-Bending"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
microsoft/promptbench
A unified evaluation framework for large language models
uptrain-ai/uptrain
UpTrain is an open-source unified platform to evaluate and improve Generative AI applications....
levitation-opensource/Manipulative-Expression-Recognition
MER is a software that identifies and highlights manipulative communication in text from human...
gabe-mousa/Apolien
AI Safety Evaluation Library