Journal Article Details
IEEE Access
Industry 4.0-Oriented Deep Learning Models for Human Activity Recognition
Saeed Mohsen [1], Ahmed Elkaseer [2], Steffen G. Scholz [2]
[1] Department of Electronics and Communications Engineering, Al-Madina Higher Institute for Engineering and Technology, Giza, Egypt
[2] Institute for Automation and Applied Informatics, Karlsruhe Institute of Technology, Karlsruhe, Germany
Keywords: deep learning; convolutional neural network (CNN); long short-term memory (LSTM); human activity recognition; Industry 4.0
DOI: 10.1109/ACCESS.2021.3125733
Source: DOAJ
【 Abstract 】

According to the Industry 4.0 vision, humans in a smart factory should be equipped with formidable and seamless communication capabilities and integrated into a cyber-physical system (CPS) that can be utilized to monitor and recognize human activity via artificial intelligence (e.g., deep learning). Recent advances in the accuracy of deep learning have contributed significantly to solving human activity recognition problems, but it remains necessary to develop high-performance deep learning models that provide greater accuracy. In this paper, three models, long short-term memory (LSTM), convolutional neural network (CNN), and a combined CNN-LSTM, are proposed for the classification of human activities. These models are applied to a dataset collected from 36 persons engaged in 6 classes of activities: downstairs, jogging, sitting, standing, upstairs, and walking. The proposed models are trained using the TensorFlow framework with a hyper-parameter tuning method to achieve high accuracy. Experimentally, confusion matrices and receiver operating characteristic (ROC) curves are used to assess the performance of the proposed models. The results illustrate that the hybrid CNN-LSTM model provides better performance than either LSTM or CNN alone in the classification of human activities. The CNN-LSTM model achieves the best performance, with a testing accuracy of 97.76%, followed by the LSTM with a testing accuracy of 96.61%, while the CNN shows the lowest testing accuracy of 94.51%. The testing loss rates for the LSTM, CNN, and CNN-LSTM are 0.236, 0.232, and 0.167, respectively, while the precision, recall, F1-measure, and area under the ROC curve (AUC) for the CNN-LSTM are 97.75%, 97.77%, 97.76%, and 100%, respectively.
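The abstract names the three architectures but not their exact configurations. As an illustration only, a hybrid CNN-LSTM of the kind described could be sketched in TensorFlow/Keras as follows; the window length (80 samples), channel count (3 accelerometer axes), filter sizes, and unit counts are assumptions for the sketch, not the authors' reported hyper-parameters:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Hypothetical input shape: sliding windows of 80 tri-axial
# accelerometer readings (80 time steps x 3 channels). The window
# size is an assumption, not taken from the paper.
model = models.Sequential([
    layers.Input(shape=(80, 3)),
    # CNN front end extracts local motion features from each window.
    layers.Conv1D(filters=64, kernel_size=3, activation="relu"),
    layers.MaxPooling1D(pool_size=2),
    # LSTM back end models temporal dependencies across the window.
    layers.LSTM(64),
    layers.Dropout(0.5),
    # Six output classes: downstairs, jogging, sitting, standing,
    # upstairs, and walking.
    layers.Dense(6, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```

The same skeleton yields the pure-CNN variant by dropping the LSTM layer (and flattening before the dense head), or the pure-LSTM variant by dropping the convolution and pooling layers.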

【 License 】

Unknown   
