Journal Article Details
PATTERN RECOGNITION, Volume 62
Trajectory aligned features for first person action recognition
Article
Singh, Suriya1  Arora, Chetan2  Jawahar, C. V.1 
[1] IIIT Hyderabad, CVIT, Hyderabad, Telangana, India
[2] IIIT Delhi, Delhi, India
Keywords: Action and activity recognition; Egocentric vision; Video indexing and analysis; Video segmentation
DOI  :  10.1016/j.patcog.2016.07.031
Source: Elsevier
【 Abstract 】

Egocentric videos are characterized by the first person view they provide. With the popularity of Google Glass and GoPro, use of egocentric videos is on the rise, and with the substantial increase in the number of egocentric videos, the value of recognizing the wearer's actions in such videos has also increased. Unstructured camera movement due to the wearer's natural head motion causes sharp changes in the visual field of the egocentric camera, so many standard third person action recognition techniques perform poorly on such videos. Objects present in the scene and the wearer's hand gestures are the most important cues for first person action recognition, but they are difficult to segment and recognize in an egocentric video. We propose a novel representation of first person actions derived from feature trajectories. The features are simple to compute using standard point tracking and, unlike many previous approaches, do not assume segmentation of hands/objects or recognition of object or hand pose. We train a bag of words classifier with the proposed features and report a performance improvement of more than 11% on publicly available datasets. Although not designed for the particular case, we show that our technique can also recognize the wearer's actions when hands or objects are not visible. (C) 2016 Elsevier Ltd. All rights reserved.

【 License 】

Free   

【 Preview 】
Attachment list
File                              Size   Format
10_1016_j_patcog_2016_07_031.pdf  955KB  PDF