Journal article details
BMC Research Notes
Can pre-trained convolutional neural networks be directly used as a feature extractor for video-based neonatal sleep and wake classification?
Muhammad Irfan1  Saadullah Farooq Abbasi1  Saeed Akbarzadeh1  Muhammad Awais1  Chen Chen1  Wei Chen2  Bin Yin3  Xi Long4  Laishuan Wang5  Chunmei Lu5  Xinhua Wang6 
[1] Center for Intelligent Medical Electronics, Department of Electronic Engineering, School of Information Science and Technology, Fudan University, 200433, Shanghai, China;Human Phenome Institute, Fudan University, Shanghai, China;Connected Care and Personal Health Department, Philips Research, Shanghai, China;Department of Electrical Engineering, Eindhoven University of Technology, Den Dolech 2, 5612 AZ, Eindhoven, The Netherlands;Department of Family Care Solutions, Philips Research, 5656 AE, Eindhoven, The Netherlands;Department of Neonatology, Children’s Hospital of Fudan University, 200032, Shanghai, China;Department of Neurology, Children’s Hospital of Fudan University, 200032, Shanghai, China;
Keywords: Convolutional neural networks (CNNs); Video electroencephalogram (VEEG); Neonatal sleep; Sleep and wake classification; Feature extraction
DOI: 10.1186/s13104-020-05343-4
Source: Springer
【 Abstract 】

Objective: In this paper, we evaluate the use of pre-trained convolutional neural networks (CNNs) as a feature extractor, followed by Principal Component Analysis (PCA) to select the most discriminant features, and a support vector machine (SVM) to classify neonatal sleep and wake states from Fluke® facial video frames. Using pre-trained CNNs as a feature extractor would greatly reduce the effort of collecting new neonatal data to train a neural network, which can be computationally expensive. Features are extracted after the fully connected layers (FCLs), and several pre-trained CNNs are compared, e.g., VGG16, VGG19, InceptionV3, GoogLeNet, ResNet, and AlexNet.

Results: From around 2 h of Fluke® video recordings of seven neonates, we achieved a modest classification performance, with an accuracy, sensitivity, and specificity of 65.3%, 69.8%, and 61.0%, respectively, using AlexNet on Fluke® (RGB) video frames. This indicates that a pre-trained model used purely as a feature extractor does not suffice for highly reliable sleep and wake classification in neonates. Future work should therefore use a dedicated neural network trained on neonatal data or a transfer learning approach.
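
The paper does not include code; the following is a minimal sketch of the kind of pipeline the abstract describes (features taken after a pre-trained AlexNet's fully connected layers, reduced with PCA, classified with an SVM), assuming PyTorch/torchvision (>= 0.13) and scikit-learn. The frame and label arrays below are hypothetical placeholders, not the Fluke® dataset, and the layer choice and PCA dimensionality are illustrative assumptions rather than the authors' exact settings.

# Sketch: pre-trained CNN as a fixed feature extractor -> PCA -> SVM.
import numpy as np
import torch
from torchvision import models, transforms
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Pre-trained AlexNet; keep everything up to (and including) the second
# fully connected layer as a frozen feature extractor, dropping the final
# 1000-class ImageNet output layer. No fine-tuning is performed.
alexnet = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
feature_extractor = torch.nn.Sequential(
    alexnet.features,
    alexnet.avgpool,
    torch.nn.Flatten(),
    *list(alexnet.classifier.children())[:-1],
)
feature_extractor.eval()

# Standard ImageNet preprocessing for RGB video frames.
preprocess = transforms.Compose([
    transforms.ToPILImage(),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def extract_features(frames):
    """frames: iterable of HxWx3 uint8 RGB arrays (one per video frame)."""
    feats = []
    with torch.no_grad():
        for frame in frames:
            x = preprocess(frame).unsqueeze(0)            # 1 x 3 x 224 x 224
            feats.append(feature_extractor(x).squeeze(0).numpy())
    return np.stack(feats)                                # N x 4096

# Hypothetical data: N RGB frames with binary labels (0 = sleep, 1 = wake).
frames = [np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8) for _ in range(40)]
labels = np.random.randint(0, 2, size=40)

X = extract_features(frames)
X_train, X_test, y_train, y_test = train_test_split(X, labels, test_size=0.3, random_state=0)

# PCA keeps the most informative directions of the 4096-dim CNN features,
# then an SVM performs the sleep/wake classification on the reduced space.
pca = PCA(n_components=20).fit(X_train)
clf = SVC(kernel="rbf").fit(pca.transform(X_train), y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(pca.transform(X_test))))

In practice the frames would come from the neonatal video recordings, the PCA dimensionality would be chosen by cross-validation, and the same wrapper could be swapped to VGG16, VGG19, InceptionV3, GoogLeNet, or ResNet for the comparison the paper reports.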

【 License 】

CC BY   

【 Preview 】
Attachments
Files Size Format View
RO202104286633938ZK.pdf 986KB PDF download
Article metrics
Downloads: 8    Views: 1