Journal article details
Frontiers in Psychology
A multimodal dialog approach to mental state characterization in clinically depressed, anxious, and suicidal populations
Psychology
David Black1  Joshua Cohen1  Allie Haq1  Jennifer Wright-Berryman2  Vanessa Richter3  Michael Neumann3  Vikram Ramanarayanan4 
[1] Clarigent Health, Mason, OH, United States; [2] Department of Social Work, College of Allied Health Sciences, University of Cincinnati, Cincinnati, OH, United States; [3] Modality.AI, Inc., San Francisco, CA, United States; [4] Modality.AI, Inc., San Francisco, CA, United States; Otolaryngology - Head and Neck Surgery (OHNS), University of California, San Francisco, San Francisco, CA, United States
Keywords: machine learning; multimodal dialog systems; speech features; natural language processing; facial features; suicide; depression; anxiety
DOI: 10.3389/fpsyg.2023.1135469
Received: 2022-12-31; Accepted: 2023-08-14; Published: 2023
Source: Frontiers
【 Abstract 】

Background: The rise in depression, anxiety, and suicide rates has led to increased demand for telemedicine-based mental health screening and remote patient monitoring (RPM) solutions to alleviate the burden on, and enhance the efficiency of, mental health practitioners. Multimodal dialog systems (MDS) that conduct on-demand, structured interviews offer a scalable and cost-effective solution to address this need.

Objective: This study evaluates the feasibility of a cloud-based MDS agent, Tina, for mental state characterization in participants with depression, anxiety, and suicide risk.

Method: Sixty-eight participants were recruited through an online health registry and completed 73 sessions, of which 15 (20.6%), 21 (28.8%), and 26 (35.6%) screened positive for depression, anxiety, and suicide risk, respectively, using conventional screening instruments. Participants then interacted with Tina as they completed a structured interview designed to elicit calibrated, open-ended responses about their feelings and emotional state. Simultaneously, the platform streamed their speech and video recordings in real time to a HIPAA-compliant cloud server to compute speech-, language-, and facial-movement-based biomarkers. After their sessions, participants completed user experience surveys. Machine learning models were developed using the extracted features and evaluated with the area under the receiver operating characteristic curve (AUC).

Results: For both depression and suicide risk, affected individuals tended to have a higher percent pause time, while those positive for anxiety showed reduced lip movement relative to healthy controls. Among single-modality classification models, speech features performed best for depression (AUC = 0.64; 95% CI = 0.51–0.78), facial features for anxiety (AUC = 0.57; 95% CI = 0.43–0.71), and text features for suicide risk (AUC = 0.65; 95% CI = 0.52–0.78). The best overall performance was achieved by decision fusion of all models in identifying suicide risk (AUC = 0.76; 95% CI = 0.65–0.87). Participants reported the experience as comfortable and were willing to share their feelings.

Conclusion: MDS is a feasible, useful, effective, and interpretable solution for RPM in real-world clinically depressed, anxious, and suicidal populations. Facial information is more informative for anxiety classification, while speech and language are more discriminative of depression and suicidality. In general, combining speech, language, and facial information improved model performance on all classification tasks.
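The decision-fusion result above can be illustrated with a minimal sketch: train one classifier per modality, then fuse their predicted probabilities by averaging and score the fused output with AUC. The feature matrices, label generation, and logistic-regression models below are hypothetical stand-ins; the paper's actual features, classifiers, and exact fusion scheme are not reproduced here.

```python
# Sketch of decision-level (late) fusion across modalities, using
# synthetic data and probability averaging as one common fusion rule.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 200
y = rng.integers(0, 2, n)  # hypothetical binary screening labels

# Hypothetical per-modality features, weakly correlated with the label.
modalities = {
    "speech": y[:, None] * 0.8 + rng.normal(size=(n, 5)),
    "text":   y[:, None] * 0.8 + rng.normal(size=(n, 5)),
    "facial": y[:, None] * 0.8 + rng.normal(size=(n, 5)),
}

idx_tr, idx_te = train_test_split(np.arange(n), test_size=0.3, random_state=0)

# Train one classifier per modality, collect test-set probabilities.
probs = []
for name, X in modalities.items():
    clf = LogisticRegression().fit(X[idx_tr], y[idx_tr])
    p = clf.predict_proba(X[idx_te])[:, 1]
    probs.append(p)
    print(f"{name} AUC: {roc_auc_score(y[idx_te], p):.2f}")

# Decision fusion: average the per-modality probabilities, then score.
fused = np.mean(probs, axis=0)
print(f"fused AUC: {roc_auc_score(y[idx_te], fused):.2f}")
```

With informative modalities, the fused score typically matches or exceeds the best single modality, mirroring the pattern reported in the abstract.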

【 License 】

Unknown   
Copyright © 2023 Cohen, Richter, Neumann, Black, Haq, Wright-Berryman and Ramanarayanan.

【 Preview 】
Attachment list
Files | Size | Format
RO202310125728074ZK.pdf | 2866KB | PDF