Improving Human-Robot Communication with Mixed-Initiative and Context-Awareness, 2009.
Measuring Gaze Orientation for Human-Robot Interaction
Computer Science; Mechanical Manufacturing
R. Brochard; B. Burger; A. Herbulot; F. Lerasle
Others: http://ceur-ws.org/Vol-693/paper2.pdf | PID: 41342
Subject classification: Computer Science (General)
Source: CEUR
【 Abstract 】
In the context of human-robot interaction, estimating gaze orientation brings useful information about the human's focus of attention. This is contextual information: when you point at something, you usually look at it. Estimating gaze orientation requires head pose estimation. Several techniques exist to estimate head pose from images; they are mainly based on training [3, 4] or on tracking local face features [6]. The approach described here is based on tracking local face features in image space using online learning; it is a mixed approach, since we track face features using some learning at the feature level. It uses SURF features [2] to guide detection and tracking. Such key features can be matched between images and used for object detection or object tracking [10]. Several approaches work on fixed-size images: training techniques mainly operate on low-resolution images because of computation costs, whereas approaches based on local feature tracking work on high-resolution images. Tracking face features such as the eyes, nose and mouth is a common problem in many applications, such as facial expression detection or video conferencing [8], but most of those applications focus on frontal face images [9]. We developed an algorithm based on face feature tracking using a parametric model. First we detect the face, then we detect the face features in the following order: eyes, mouth, nose. In order to achieve full-profile detection, we use sets of SURF key points to learn what the eyes, mouth and nose look like once tracking is initialized. Once those sets are known, they are used to detect and track the face features. Each SURF key point carries a descriptor that is commonly used to identify it; here we add global geometric information by using the relative positions between key points. We then use a particle filter to track the face features with those SURF-based detectors, compute the head pose angles from the feature positions, and pass the results through a median filter. This paper is organized as follows: Section 2 describes our modeling of visual features, Section 3 presents our tracking implementation, Section 4 presents the results obtained with our implementation, and Section 5 discusses future work.