psychology-of-AI/Personality-Illusion

The Personality Illusion: Revealing Dissociation Between Self-Reports & Behavior in LLMs.

Score: 39 / 100 (Emerging)

This project helps AI researchers and developers understand how Large Language Models (LLMs) actually behave compared to what they claim about themselves. You provide an LLM and prompts drawn from self-report personality questionnaires alongside matching behavioral tasks. The output is a dataset and analysis revealing discrepancies between the LLM's stated 'personality' and its actual behavior, offering insight into its internal consistency and potential biases. AI ethicists, cognitive scientists studying AI, and LLM developers would find this useful.


Use this if you need to rigorously evaluate the psychological consistency of an LLM's responses, specifically comparing its self-descriptions with its performance on behavioral tests.

Not ideal if you are looking for a tool to develop or fine-tune an LLM's 'personality' directly, or if your focus is on general LLM performance metrics outside of psychological assessment.

Tags: AI ethics, LLM evaluation, computational psychology, AI alignment, human-AI interaction
No package. No dependents.
Maintenance: 6 / 25
Adoption: 9 / 25
Maturity: 15 / 25
Community: 9 / 25


Stars: 100
Forks: 6
Language: Jupyter Notebook
License: MIT
Last pushed: Jan 10, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/psychology-of-AI/Personality-Illusion"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
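The same data can be fetched programmatically. A minimal Python sketch using only the standard library, assuming the endpoint from the curl example above returns JSON; the `fetch_quality` helper and its return shape are illustrative, not a documented client:

```python
import json
import urllib.request

# Base path taken from the curl example above; "llm-tools" appears to be
# the collection segment of the endpoint.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(owner: str, repo: str) -> str:
    """Build the quality-endpoint URL for an owner/repo pair."""
    return f"{API_BASE}/llm-tools/{owner}/{repo}"


def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch and decode the JSON quality report (requires network access).

    The response schema is an assumption -- inspect a real response
    before relying on any specific field names.
    """
    with urllib.request.urlopen(quality_url(owner, repo), timeout=10) as resp:
        return json.load(resp)


if __name__ == "__main__":
    # Prints the endpoint for the repository described on this page.
    print(quality_url("psychology-of-AI", "Personality-Illusion"))
```

The URL builder is separated from the network call so the endpoint can be checked or logged without making a request.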