AISmithLab/HumanStudy-Bench

HumanStudy-Bench: Towards AI Agent Design for Participant Simulation

Quality score: 40 / 100 (Emerging)

This tool helps social science researchers and AI ethicists test how well AI models can simulate human participants in psychological experiments. You provide an AI model and a published human-subject study, and it outputs rigorous metrics showing whether the AI's behavior aligns with real human responses. It is aimed at researchers and practitioners evaluating AI for participant simulation.

Use this if you need to rigorously evaluate the effectiveness of different AI agent designs or large language models in replicating human behavior in social science experiments.

Not ideal if you are looking for a tool to run actual human experiments or to generate synthetic data without comparing against ground truth human behavior.

social-science-research AI-ethics experimental-psychology participant-simulation human-behavior-modeling
No Package · No Dependents

Maintenance: 10 / 25
Adoption: 5 / 25
Maturity: 11 / 25
Community: 14 / 25

Stars: 12
Forks: 3
Language: Python
License: MIT
Last pushed: Mar 08, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/agents/AISmithLab/HumanStudy-Bench"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
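The same request can be made programmatically. A minimal Python sketch of calling the endpoint above, assuming the API returns a JSON body (the structure of that body is not documented here, so the parsed result is printed as-is rather than relying on specific field names):

```python
# Minimal sketch: fetch the quality report from the public API.
# No key is needed for up to 100 requests/day (per the note above).
# The shape of the JSON response is an assumption; inspect it before
# depending on specific fields.
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality/agents"

def quality_url(owner: str, repo: str) -> str:
    """Build the report URL for a given GitHub owner/repo."""
    return f"{API_BASE}/{owner}/{repo}"

def fetch_quality(owner: str, repo: str) -> dict:
    """GET the quality report and parse the JSON body."""
    with urllib.request.urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)

if __name__ == "__main__":
    report = fetch_quality("AISmithLab", "HumanStudy-Bench")
    print(json.dumps(report, indent=2))
```

Pass an `Authorization` or key header as required by your free-key tier; the unauthenticated call above matches the curl example.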