sodascience/social_science_inferences_with_llms

Addressing LLM-related measurement error in social science modeling research.

Score: 30/100 (Emerging)

This project helps social scientists improve the trustworthiness of research that uses Large Language Models (LLMs) to collect data on topics such as personality or political attitudes. It reviews and synthesizes methods for addressing LLM-induced measurement error and organizes them into a practical framework, giving researchers guidance on keeping the models and inferences they use to study societal processes valid and reliable.

No commits in the last 6 months.

Use this if you are a social scientist conducting research that uses LLMs to gather data for modeling and need to ensure the accuracy and reliability of your findings.

Not ideal if your research does not involve LLM-generated data or if you are solely interested in traditional questionnaire-based measurement error methods.

social-science-research survey-methodology quantitative-methods causal-inference measurement-validation
Stale (6 months) · No Package · No Dependents
Maintenance: 2/25
Adoption: 5/25
Maturity: 16/25
Community: 7/25


Stars: 10
Forks: 1
Language: (none)
License: MIT
Last pushed: May 08, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/sodascience/social_science_inferences_with_llms"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.