Journal Article Details
IEEE Access
Using Motion History Images With 3D Convolutional Networks in Isolated Sign Language Recognition
Ozge Mercanoglu Sincan [1], Hacer Yalim Keles [2]
[1] Computer Engineering Department, Ankara University, Ankara, Turkey; [2] Computer Engineering Department, Hacettepe University, Ankara, Turkey
Keywords: 3D-CNN; attention; deep learning; motion history image; sign language recognition
DOI: 10.1109/ACCESS.2022.3151362
Source: DOAJ
Abstract

Sign language recognition with computational models is a challenging problem that requires simultaneous spatio-temporal modeling of multiple sources, i.e., the face, hands, and body. In this paper, we propose an isolated sign language recognition approach built around a model trained on Motion History Images (MHI) generated from RGB video frames. These RGB-MHI images effectively summarize the spatio-temporal content of each sign video in a single RGB image. We propose two different ways of using this RGB-MHI model. In the first approach, we use the RGB-MHI model as a motion-based spatial attention module integrated into a 3D-CNN architecture. In the second approach, we combine the RGB-MHI model features with the features of a 3D-CNN model using a late fusion technique. We perform extensive experiments on two recently released large-scale isolated sign language datasets, AUTSL and BosphorusSign22k. Our experiments show that our models, which use only RGB data, can compete with state-of-the-art models in the literature that use multi-modal data.
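To make the MHI idea in the abstract concrete, the sketch below computes a classical Bobick-Davis motion history image from a stack of RGB frames using simple frame differencing. The abstract does not specify the paper's exact RGB-MHI construction or network integration, so the function name motion_history_image, the decay parameter tau, the diff_threshold value, and the attention / late-fusion comments at the end are illustrative assumptions, not the authors' implementation.

import numpy as np

def motion_history_image(frames, tau=None, diff_threshold=30.0):
    """Compute a classical Motion History Image (MHI) for a video clip.

    frames: uint8 array of shape (T, H, W, 3) holding RGB frames.
    Returns an (H, W) float32 image in [0, 1] where brighter pixels
    correspond to more recent motion.
    """
    num_frames = len(frames)
    tau = float(num_frames) if tau is None else float(tau)
    gray = frames.astype(np.float32).mean(axis=-1)   # crude RGB -> grayscale
    mhi = np.zeros(gray.shape[1:], dtype=np.float32)
    for t in range(1, num_frames):
        moving = np.abs(gray[t] - gray[t - 1]) > diff_threshold
        # Bobick-Davis update: refresh pixels that moved, linearly decay the rest.
        mhi = np.where(moving, tau, np.maximum(mhi - 1.0, 0.0))
    return mhi / tau                                  # normalize to [0, 1]


if __name__ == "__main__":
    # Toy usage: a random 16-frame RGB clip stands in for a sign video.
    clip = np.random.randint(0, 256, size=(16, 112, 112, 3), dtype=np.uint8)
    mhi = motion_history_image(clip)
    print(mhi.shape)   # -> (112, 112)

    # Approach 1 (illustrative only): resize the MHI to a 3D-CNN feature map's
    # spatial size and use it as a spatial attention gate, e.g.
    # attended = features * (1.0 + mhi_resized).
    # Approach 2 (late fusion) would instead concatenate RGB-MHI model features
    # with 3D-CNN features before the classifier.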

License

Unknown   
