genandlam/multi-modal-depression-detection

Official codebase for "Context-Aware Deep Learning for Multi-Modal Depression Detection" [ICASSP 2019, Oral]

Score: 21 / 100 (Experimental)

This project offers an automated way to screen for depression from clinical interviews. Given recorded interview data (audio and transcripts), it outputs a prediction or score indicating the likelihood of depression, helping mental health professionals and researchers streamline initial assessment.

No commits in the last 6 months.

Use this if you need an automated, data-driven tool to help detect depression from spoken clinical interviews.

Not ideal if you're looking for a diagnostic tool or a solution that uses visual cues (like video) from interviews.

mental-health-screening clinical-interview-analysis depression-assessment psychological-research speech-text-analysis
Stale (6m) · No Package · No Dependents
Maintenance: 0 / 25
Adoption: 5 / 25
Maturity: 16 / 25
Community: 0 / 25


Stars: 11
Forks:
Language: Jupyter Notebook
License: MIT
Last pushed: Dec 26, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/genandlam/multi-modal-depression-detection"

Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000/day.
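The same endpoint can be called programmatically. A minimal Python sketch, assuming only the URL shown in the curl command above (the response schema is not documented here, so the decoded JSON is treated as opaque):

```python
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(category: str, owner: str, repo: str) -> str:
    # Build the endpoint URL following the pattern from the curl example:
    # {API_BASE}/{category}/{owner}/{repo}
    return f"{API_BASE}/{category}/{owner}/{repo}"


def fetch_quality(category: str, owner: str, repo: str) -> dict:
    # Fetch and decode the JSON quality report. Field names in the
    # response are an assumption; inspect the dict before relying on them.
    with urllib.request.urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)
```

For this repository the call would be `fetch_quality("ml-frameworks", "genandlam", "multi-modal-depression-detection")`, subject to the daily rate limit above.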