sodascience/social_science_inferences_with_llms
Addressing LLM-related measurement error in social science modeling research.
This project helps social scientists improve the trustworthiness of research that uses Large Language Models (LLMs) to collect data on topics such as personality or political attitudes. It reviews and synthesizes methods for addressing LLM-induced measurement error and provides a practical framework, giving researchers guidance on ensuring that their models and inferences about societal processes are valid and reliable.
No commits in the last 6 months.
Use this if you are a social scientist conducting research that uses LLMs to gather data for modeling and need to ensure the accuracy and reliability of your findings.
Not ideal if your research does not involve LLM-generated data or if you are solely interested in traditional questionnaire-based measurement error methods.
Stars: 10
Forks: 1
Language: —
License: MIT
Category: —
Last pushed: May 08, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/sodascience/social_science_inferences_with_llms"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
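The same endpoint can be called from Python with the standard library. This is a minimal sketch: it assumes the endpoint returns JSON (the catalog does not document the response schema), and the helper names are illustrative, not part of any official client.

```python
import json
from urllib.parse import quote
from urllib.request import urlopen

# Base path taken from the curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/quality/transformers"


def quality_url(owner: str, repo: str) -> str:
    """Build the quality-API URL for a GitHub owner/repo pair."""
    return f"{BASE}/{quote(owner)}/{quote(repo)}"


def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch the quality record; assumes a JSON response body."""
    with urlopen(quality_url(owner, repo)) as resp:  # keyless tier: 100 requests/day
        return json.load(resp)


if __name__ == "__main__":
    print(quality_url("sodascience", "social_science_inferences_with_llms"))
```

With an API key (1,000 requests/day), you would presumably pass it in a header; the catalog does not specify the header name, so that is left out here.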
Higher-rated alternatives
mlabonne/llm-datasets
Curated list of datasets and tools for post-training.
malteos/llm-datasets
A collection of datasets for language model pretraining including scripts for downloading,...
magpie-align/magpie
[ICLR 2025] Alignment Data Synthesis from Scratch by Prompting Aligned LLMs with Nothing. Your...
jd-coderepos/llms4subjects
The official SemEval 2025 Task 5 - LLMs4Subjects - Shared Task Dataset repository
willxxy/ECG-Bench
A Unified Framework for Benchmarking Generative Electrocardiogram-Language Models (ELMs)