pablo-chacon/Spoon-Bending

Educational analysis of LLM alignment, safety behavior, and framing-sensitive response patterns.

Quality score: 31 / 100 (Emerging)

This project offers an educational analysis of how Large Language Models (LLMs) like ChatGPT respond to different types of questions, especially on sensitive topics. It shows how rephrasing or recontextualizing a query can elicit different answers from the same model. Researchers and educators in AI ethics or computational social science can use it to study and teach AI alignment and bias.

Use this if you are studying or teaching about AI safety, bias, and how conversational AI guardrails can be influenced by how questions are framed.

Not ideal if you are looking for operational guidance on misusing AI or bypassing safety features for unethical purposes, as this is for educational research only.

Tags: AI Ethics, LLM Alignment, AI Bias, Computational Social Science, AI Education
No package, no dependents
Score breakdown:
Maintenance: 6 / 25
Adoption: 6 / 25
Maturity: 15 / 25
Community: 4 / 25


Stars: 22
Forks: 1
Language:
License:
Last pushed: Nov 04, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/pablo-chacon/Spoon-Bending"

Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000 requests/day.