AISmithLab/HumanStudy-Bench
HumanStudy-Bench: Towards AI Agent Design for Participant Simulation
This tool helps social science researchers and AI ethicists test how well AI models can simulate human participants in psychological experiments. You supply an AI model and a published human-subject study, and it produces rigorous metrics showing whether the AI's behavior aligns with the real human responses. It is aimed at researchers and practitioners evaluating AI for participant simulation.
Use this if you need to rigorously evaluate how well different AI agent designs or large language models replicate human behavior in social science experiments.
Not ideal if you are looking for a tool to run actual human experiments, or to generate synthetic data without comparing it against ground-truth human behavior.
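To illustrate what an alignment metric of this kind can look like (this is a hypothetical sketch, not HumanStudy-Bench's actual API; all names and values below are made up), the snippet compares per-item mean ratings from a published study against the means an LLM produces when role-playing the same participants:

# Hypothetical sketch, not the repository's actual API: compare an AI model's
# simulated responses against published human data on a per-item basis.
from statistics import mean
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Mean ratings per survey item (1-7 Likert) reported in a study, and the means
# produced by an LLM role-playing the same participants. Values are invented.
human_means     = [3.1, 4.5, 2.8, 5.0, 3.9]
simulated_means = [3.4, 4.2, 3.0, 4.6, 4.1]

alignment = pearson(human_means, simulated_means)
mad = mean(abs(h - s) for h, s in zip(human_means, simulated_means))

print(f"correlation of item means: {alignment:.3f}")
print(f"mean absolute deviation:   {mad:.3f}")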
Stars: 12
Forks: 3
Language: Python
License: MIT
Category:
Last pushed: Mar 08, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/agents/AISmithLab/HumanStudy-Bench"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
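If you would rather call the endpoint from Python than curl, here is a minimal sketch using only the standard library; it assumes the endpoint returns JSON, which the curl example above suggests but does not guarantee.

# Minimal sketch: fetch the quality data for this repo from the public API.
# Assumes the endpoint returns JSON; adjust the handling if it does not.
import json
from urllib.request import urlopen

URL = "https://pt-edge.onrender.com/api/v1/quality/agents/AISmithLab/HumanStudy-Bench"

with urlopen(URL, timeout=10) as resp:
    data = json.load(resp)

print(json.dumps(data, indent=2))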
Higher-rated alternatives
StonyBrookNLP/appworld
🌍 AppWorld: A Controllable World of Apps and People for Benchmarking Function Calling and...
qualifire-dev/rogue
AI Agent Evaluator & Red Team Platform
microsoft/WindowsAgentArena
Windows Agent Arena (WAA) 🪟 is a scalable OS platform for testing and benchmarking of...
future-agi/ai-evaluation
Evaluation Framework for all your AI related Workflows
RouteWorks/RouterArena
RouterArena: An open framework for evaluating LLM routers with standardized datasets, metrics,...