Journal Article Details
Journal of NeuroEngineering and Rehabilitation
The use of machine learning and deep learning techniques to assess proprioceptive impairments of the upper limb after stroke
Research
Stephen H. Scott1  Delowar Hossain2  Sean P. Dukelow2  Tyler Cluff3 
[1] Department of Biomedical and Molecular Sciences, Queen’s University, Kingston, ON, Canada; [2] Department of Clinical Neuroscience, Cumming School of Medicine, University of Calgary, Calgary, AB, Canada; [3] Faculty of Kinesiology, University of Calgary, Calgary, AB, Canada
Keywords: Stroke; Proprioception; Robotics; Position sense; Machine learning; Deep learning
DOI: 10.1186/s12984-023-01140-9
Received: 2022-04-01; Accepted: 2023-01-18; Published: 2023
Source: Springer
【 Abstract 】

Background: Robots can generate rich kinematic datasets that have the potential to provide far more insight into impairments than standard clinical ordinal scales. Determining how to define the presence or absence of impairment in individuals using kinematic data, however, can be challenging. Machine learning techniques offer a potential solution to this problem. In the present manuscript we examine proprioception in stroke survivors using a robotic arm position matching task. Proprioception is impaired in 50–60% of stroke survivors and has been associated with poorer motor recovery and longer lengths of hospital stay. We present a simple cut-off score technique for individual kinematic parameters and an overall task score to determine impairment. We then compare the ability of different machine learning (ML) techniques and the above-mentioned task score to correctly classify individuals with or without stroke based on kinematic data.

Methods: Participants performed an Arm Position Matching (APM) task in an exoskeleton robot. The task produced 12 kinematic parameters that quantify multiple attributes of position sense. We first quantified impairment in individual parameters and in an overall task score by determining whether participants with stroke fell outside the 95% cut-off score of control (normative) values. We then applied five machine learning algorithms (Logistic Regression, Decision Tree, Random Forest, Random Forest with Hyperparameter Tuning, and Support Vector Machine) and a deep learning algorithm (Deep Neural Network) to classify individual participants as having had a stroke or not, based only on the kinematic parameters, using a tenfold cross-validation approach.

Results: We recruited 429 participants with neuroimaging-confirmed stroke (< 35 days post-stroke) and 465 healthy controls. Depending on the APM parameter, 10.9–48.4% of stroke participants were impaired, while 44% were impaired based on their overall task score. The mean performance metrics of the machine learning and deep learning models were: accuracy 82.4%, precision 85.6%, recall 76.5%, and F1 score 80.6%. All machine learning and deep learning models displayed similar classification accuracy; however, the Random Forest model had the highest numerical accuracy (83%). Our models showed higher sensitivity and specificity (AUC = 0.89) than the overall task score (AUC = 0.85) in classifying individual participants based on their performance in the APM task. We also found that variability was the most important feature for classifying performance in the APM task.

Conclusion: Our ML models displayed similar classification performance. The ML models were able to integrate more kinematic information and relationships between variables into decision making, and displayed better classification performance than the overall task score. ML may help provide insight into individual kinematic features whose clinical importance has previously been overlooked.
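To make the workflow concrete, the sketch below illustrates the two approaches the abstract describes: flagging impairment when a participant falls outside a 95% cut-off derived from control data, and tenfold cross-validated classification with a Random Forest, followed by the reported performance metrics and a feature-importance ranking. This is a minimal sketch assuming scikit-learn and synthetic placeholder data, not the authors' code; all variable names are hypothetical, and the exact parameter definitions, cut-off directions, and model settings used in the paper are not reproduced here.

```python
# Minimal sketch (not the authors' code) of the analyses described above.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import (accuracy_score, f1_score, precision_score,
                             recall_score, roc_auc_score)
from sklearn.model_selection import StratifiedKFold, cross_val_predict

rng = np.random.default_rng(0)
X = rng.normal(size=(894, 12))      # stand-in for the 12 APM parameters
y = rng.integers(0, 2, size=894)    # stand-in labels: 1 = stroke, 0 = control

# 1) Cut-off score: flag a parameter as impaired when a participant falls
#    outside the 95th percentile of control values (direction is simplified
#    here; in practice it depends on how each parameter is defined).
cutoffs = np.percentile(X[y == 0], 95, axis=0)
impaired = X > cutoffs
pct_stroke_impaired = 100 * impaired[y == 1].mean(axis=0)

# 2) Tenfold cross-validated classification (Random Forest shown; the paper
#    compares it against four other ML models and a deep neural network).
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
clf = RandomForestClassifier(n_estimators=500, random_state=0)
y_pred = cross_val_predict(clf, X, y, cv=cv)
y_prob = cross_val_predict(clf, X, y, cv=cv, method="predict_proba")[:, 1]

print(f"accuracy  {accuracy_score(y, y_pred):.3f}")
print(f"precision {precision_score(y, y_pred):.3f}")
print(f"recall    {recall_score(y, y_pred):.3f}")
print(f"F1 score  {f1_score(y, y_pred):.3f}")
print(f"AUC       {roc_auc_score(y, y_prob):.3f}")

# 3) Feature importance: refit on all data to rank the 12 parameters
#    (the paper reports variability as the most informative feature).
clf.fit(X, y)
ranking = np.argsort(clf.feature_importances_)[::-1]
print("parameters ranked by importance:", ranking)
```

On real data, the per-parameter impairment rates from step 1 would correspond to the 10.9–48.4% range reported in the Results, and the cross-validated metrics from step 2 to the accuracy/precision/recall/F1/AUC figures quoted above.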

【 License 】

CC BY   
© The Author(s) 2023

【 Preview 】
Attachments
Files Size Format View
RO202305110500806ZK.pdf 1718KB PDF download
Fig. 4 91KB Image download
Fig. 6 102KB Image download
Fig. 7 106KB Image download