Dissertation Details
Learning embodied models of actions from first person video
Author: Li, Yin
Advisor: Rehg, James M.
Committee: Essa, Irfan; Hays, James; Belongie, Serge; Grauman, Kristen
University: Georgia Institute of Technology
Department: Interactive Computing
Keywords: First person vision; Egocentric vision; Action recognition; Gaze estimation; Computer vision
Full text: https://smartech.gatech.edu/bitstream/1853/59207/1/LI-DISSERTATION-2017.pdf
United States | English
Source: SMARTech Repository
【 Abstract 】

Advances in sensor miniaturization, low-power computing, and battery life have enabled the first generation of mainstream wearable cameras. Millions of hours of video are captured by these devices every year, creating a record of our daily visual experiences at an unprecedented scale. This has created a major opportunity to develop new capabilities and products based on computer vision. Meanwhile, computer vision is at a tipping point. Major progress has been made over the last few years in both visual recognition and 3D reconstruction. The stage is set for a grand challenge that can move our field away from narrowly focused benchmarks and toward “in the wild”, long-term, open-world problems in visual analytics and embedded sensing.

My dissertation focuses on the automatic analysis of visual data captured from wearable cameras, known as First Person Vision (FPV). My goal is to develop novel embodied representations for first person activity recognition. More specifically, I propose to leverage first person visual cues, including body motion, hand locations, and egocentric gaze, for understanding the camera wearer's attention and actions. These cues are naturally “embodied” as they derive from the purposive body movements of the person and capture the concept of action within its context.

To this end, I have investigated three important aspects of first person actions. First, I led the effort of developing a new FPV dataset of meal preparation tasks. This dataset is by far the largest benchmark for FPV action recognition, gaze estimation, and hand segmentation. Second, I present a method to estimate egocentric gaze in the context of actions. My work demonstrates for the first time that egocentric gaze can be reliably estimated using only head motion and hand locations, without the need for object or action cues. Finally, I develop methods that incorporate first person visual cues for recognizing actions in FPV. My work shows that this embodied representation can significantly improve the accuracy of FPV action recognition.
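The abstract's second contribution is estimating egocentric gaze from embodied cues (head motion and hand locations) alone. Purely as an illustration of that general idea, and not the dissertation's actual model, the sketch below regresses a 2D gaze point from per-frame head-motion and hand-location features; the feature definitions, the synthetic data, and the choice of a random-forest regressor are all assumptions made for demonstration.

# Illustrative sketch only -- NOT the method proposed in the dissertation.
# It shows the general idea of predicting egocentric gaze from embodied cues
# (head motion + hand location) rather than from object or action recognition.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Per-frame embodied cues (synthetic stand-ins):
#   head_motion -> 2D global flow vector induced by head/camera movement
#   hand_xy     -> normalized (x, y) location of the dominant hand
n_frames = 2000
head_motion = rng.normal(0.0, 1.0, size=(n_frames, 2))
hand_xy = rng.uniform(0.0, 1.0, size=(n_frames, 2))
features = np.hstack([head_motion, hand_xy])

# Synthetic "ground-truth" gaze: near the hand, shifted against head motion,
# mimicking the coordination between gaze, hand, and head movements.
gaze_xy = np.clip(
    hand_xy - 0.05 * head_motion + rng.normal(0.0, 0.02, size=(n_frames, 2)),
    0.0, 1.0)

# Train on the first 80% of frames, evaluate on the remainder.
split = int(0.8 * n_frames)
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(features[:split], gaze_xy[:split])

pred = model.predict(features[split:])
err = np.linalg.norm(pred - gaze_xy[split:], axis=1).mean()
print(f"mean gaze error (normalized image coordinates): {err:.3f}")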

【 Preview 】
File list
Learning embodied models of actions from first person video (PDF, 23829 KB)
Document metrics
Downloads: 4    Views: 21