Journal Article Details
Informatics
Multimodal Hand Gesture Classification for the Human–Car Interaction
Andrea D’Eusanio 1, Roberto Vezzani 1, Guido Borghi 1, Rita Cucchiara 2, Alessandro Simoni 2, Stefano Pini 2
[1] AIRI—Artificial Intelligence Research and Innovation Center, University of Modena and Reggio Emilia, 41125 Modena, Italy; [2] DIEF—Department of Engineering “Enzo Ferrari”, University of Modena and Reggio Emilia, 41125 Modena, Italy
Keywords: hand gesture recognition; natural user interfaces; depth maps; infrared images; computer vision; deep learning
DOI: 10.3390/informatics7030031
Source: DOAJ
【Abstract】

The recent spread of low-cost, high-quality RGB-D and infrared sensors has supported the development of Natural User Interfaces (NUIs), in which the interaction is carried out without physical devices such as keyboards and mice. In this paper, we propose a NUI based on dynamic hand gestures acquired with RGB, depth and infrared sensors. The system is developed for the challenging automotive context, aiming to reduce driver distraction during the driving activity. Specifically, the proposed framework is based on a multimodal combination of Convolutional Neural Networks that take depth and infrared images as input, achieving a good level of light invariance, a key element in vision-based in-car systems. We test our system on a recent multimodal dataset collected in a realistic automotive setting, with the sensors placed at an innovative point of view, i.e., in the tunnel console looking upwards. The dataset consists of a large number of labelled frames containing 12 dynamic gestures performed by multiple subjects, making it suitable for deep learning-based approaches. In addition, we test the system on a different well-known public dataset created for driver–car interaction. Experimental results on both datasets show the efficacy and the real-time performance of the proposed method.
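The abstract describes combining per-modality Convolutional Neural Networks (depth and infrared) into a single multimodal prediction. As an illustrative sketch only (not the authors' code), one common way to combine such networks is late fusion: each modality's network produces a score vector over the 12 gesture classes, and the fused prediction is the argmax of a weighted average of those scores. The function below assumes each network already outputs normalized class scores; the weighting scheme is a hypothetical example, not taken from the paper.

```python
def fuse_predictions(depth_scores, infrared_scores, weights=(0.5, 0.5)):
    """Late fusion of per-modality class scores (illustrative sketch).

    depth_scores, infrared_scores: lists of per-class scores (e.g. softmax
    outputs over the 12 dynamic gesture classes) from each modality's CNN.
    weights: relative contribution of (depth, infrared) to the fused score.
    Returns the index of the predicted gesture class.
    """
    if len(depth_scores) != len(infrared_scores):
        raise ValueError("both modalities must score the same class set")
    w_depth, w_ir = weights
    # Weighted average of the two score vectors, class by class.
    fused = [w_depth * d + w_ir * i
             for d, i in zip(depth_scores, infrared_scores)]
    # Predicted class is the one with the highest fused score.
    return max(range(len(fused)), key=fused.__getitem__)
```

For example, if the depth network weakly favours class 3 and the infrared network agrees, the fused prediction is class 3 even when either modality alone is uncertain; weighting lets one modality dominate when the other is unreliable (e.g. RGB at night, which motivates the paper's depth/infrared choice).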

【License】

Unknown   
