EternityYW/BiasEval-LLM-MentalHealth

Unveiling and Mitigating Bias in Mental Health Analysis with Large Language Models

Score: 32 / 100 (Emerging)

This project helps mental health professionals and researchers assess and reduce bias in large language models used to analyze mental health text. Given raw text, optionally paired with demographic information, it outputs predictions of mental health conditions together with reasoning, highlighting potential biases tied to social factors. Its primary users are people applying AI in mental healthcare who need fair, accurate diagnostic support or research analysis.
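
In practice the probe is simple: hold the text fixed, vary only the stated demographic attribute, and check whether the model's label changes. Below is a minimal Python sketch of that idea; predict_condition is a hypothetical toy stand-in for the actual LLM call and is not part of this repository.

from typing import Callable

def predict_condition(prompt: str) -> str:
    # Toy stand-in: a real run would call the LLM under evaluation.
    return "depression" if "hopeless" in prompt.lower() else "no condition"

def demographic_sensitivity(post: str, attributes: list[str],
                            predict: Callable[[str], str] = predict_condition) -> dict[str, str]:
    # Ask the same question about the same post under each demographic framing.
    labels = {}
    for attr in attributes:
        prompt = f"The author is {attr}. Post: {post}\nDoes the author show signs of a mental health condition?"
        labels[attr] = predict(prompt)
    return labels

labels = demographic_sensitivity("I feel hopeless and can't sleep.",
                                 ["a Black woman", "a white man"])
print(labels, "sensitive to demographics:", len(set(labels.values())) > 1)

If the labels differ across framings, the model is treating demographics as diagnostic signal, which is the kind of bias this project measures and mitigates.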

No commits in the last 6 months.

Use this if you are developing or evaluating AI tools for mental health analysis and need to understand and address how biases related to social factors (like race, gender, religion) might impact their predictions.

Not ideal if you are looking for a pre-built, production-ready diagnostic tool for direct patient use, as this project focuses on bias evaluation and mitigation in models rather than clinical deployment.

mental-healthcare AI-ethics bias-auditing natural-language-processing clinical-text-analysis
Stale (6m) · No package · No dependents
Maintenance 0 / 25
Adoption 5 / 25
Maturity 16 / 25
Community 11 / 25


Stars: 12
Forks: 2
Language: Jupyter Notebook
License: MIT
Last pushed: Jun 21, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/EternityYW/BiasEval-LLM-MentalHealth"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
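
The same endpoint can be queried from Python; a minimal sketch using the requests library is below. The URL comes from the curl command above, while the shape of the JSON response is an assumption here.

import requests  # third-party: pip install requests

url = ("https://pt-edge.onrender.com/api/v1/quality/"
       "transformers/EternityYW/BiasEval-LLM-MentalHealth")
resp = requests.get(url, timeout=10)
resp.raise_for_status()  # raises on HTTP errors, e.g. after exceeding the daily limit
print(resp.json())       # quality metrics as JSON; exact field names are not shown here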