LlaMADRS: Prompting Large Language Models for Interview-Based Depression Assessment
Format: Article | Language: English
Abstract: This study introduces LlaMADRS, a novel framework leveraging open-source Large Language Models (LLMs) to automate depression severity assessment using the Montgomery-Asberg Depression Rating Scale (MADRS). We employ a zero-shot prompting strategy with carefully designed cues to guide the model in interpreting and scoring transcribed clinical interviews. Our approach, tested on 236 real-world interviews from the Context-Adaptive Multimodal Informatics (CAMI) dataset, demonstrates strong correlations with clinician assessments. The Qwen 2.5-72b model achieves near-human-level agreement across most MADRS items, with Intraclass Correlation Coefficients (ICC) closely approaching those between human raters. We provide a comprehensive analysis of model performance across different MADRS items, highlighting strengths and current limitations. Our findings suggest that LLMs, with appropriate prompting, can serve as efficient tools for mental health assessment, potentially increasing accessibility in resource-limited settings. However, challenges remain, particularly in assessing symptoms that rely on non-verbal cues, underscoring the need for multimodal approaches in future work.
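The abstract describes zero-shot prompting of an open LLM to score transcribed interviews on individual MADRS items. The Python sketch below is only an illustrative stand-in: the checkpoint (a 7B Qwen 2.5 instruct model rather than the 72B model reported), the system prompt, and the `score_item` helper are assumptions for demonstration, not the authors' actual cues or pipeline.

```python
# Minimal zero-shot prompting sketch for rating one MADRS item from a transcript.
# Assumes a locally available Hugging Face checkpoint; all prompt wording is hypothetical.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "Qwen/Qwen2.5-7B-Instruct"  # lighter stand-in for the 72B model in the paper

# Hypothetical instruction for a single MADRS item (the paper's carefully designed cues differ).
SYSTEM_PROMPT = (
    "You are a clinical rater. Read the interview transcript and rate the MADRS item "
    "'Reported Sadness' on a 0-6 scale, where 0 = occasional sadness in keeping with "
    "circumstances and 6 = continuous, unvarying sadness. Reply with a single integer."
)

def score_item(transcript: str) -> str:
    """Return the model's raw score string for one MADRS item."""
    tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
    model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, device_map="auto")
    messages = [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": f"Transcript:\n{transcript}\n\nMADRS item score:"},
    ]
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    output = model.generate(inputs, max_new_tokens=8, do_sample=False)
    # Decode only the newly generated tokens, i.e. the model's answer.
    return tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True)

if __name__ == "__main__":
    print(score_item("Interviewer: How has your mood been over the past week? Participant: ..."))
```

Greedy decoding (`do_sample=False`) is used here so that repeated runs on the same transcript yield the same score, which makes agreement statistics such as ICC easier to interpret.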
DOI: 10.48550/arxiv.2501.03624