Thesis Details
Diagnostic information use to understand brain mechanisms of facial expression categorization
Subjects: BF Psychology; Q Science (General)
Authors: Petro, Lucy S.; Schyns, Philippe
University: University of Glasgow
Department: School of Psychology
Keywords: Facial expressions, diagnostic information, eye movements, EEG, fMRI
Others: http://theses.gla.ac.uk/2011/1/2010petrophd.pdf
Source: University of Glasgow
【 Abstract 】

Proficient categorization of facial expressions is crucial for normal social interaction. Neurophysiological, behavioural, event-related potential, lesion and functional neuroimaging techniques can be used to investigate the underlying brain mechanisms supporting this seemingly effortless process, and the associated arrangement of bilateral networks. These brain areas exhibit consistent and replicable activation patterns, and can be broadly defined to include visual (occipital and temporal), limbic (amygdala) and prefrontal (orbitofrontal) regions. Together, these areas support early perceptual processing, the formation of detailed representations and the subsequent recognition of expressive faces. Despite the critical role of facial expressions in social communication and extensive work in this area, it is still not known how the brain decodes nonverbal signals in terms of expression-specific features. For these reasons, this thesis investigates the role of these so-called diagnostic facial features at three significant stages in expression recognition: the spatiotemporal inputs to the visual system, the dynamic integration of features in higher visual (occipitotemporal) areas, and early sensitivity to features in V1.

In Chapter 1, the basic emotion categories are presented, along with the brain regions that are activated by these expressions. In line with this, the current cognitive theory of face processing is reviewed, covering functional and anatomical dissociations within the distributed neural “face network”. Chapter 1 also introduces the way in which we measure and use diagnostic information to derive brain sensitivity to specific facial features, and how this is a useful tool for understanding the spatial and temporal organisation of expression recognition in the brain. In relation to this, hierarchical, bottom-up neural processing is discussed along with high-level, top-down facilitatory mechanisms.

Chapter 2 describes an eye-movement study revealing that the inputs to the visual system delivered by fixations reflect diagnostic information use. Inputs to the visual system dictate the information distributed to cognitive systems during the seamless and rapid categorization of expressive faces, and how we perform eye movements during this task informs how task-driven and stimulus-driven mechanisms interact to guide the extraction of information supporting recognition. We recorded the eye movements of observers as they categorized the six basic facial expressions, and used a measure of task-relevant information (diagnosticity) to interpret oculomotor behaviour, with a focus on two findings. Firstly, the regions fixated differ between expressions. Secondly, over a sequence of fixations, the intersection of fixations with diagnostic information increases. This suggests a top-down drive to acquire task-relevant information, with different functional roles for first and final fixations (a minimal sketch of this intersection measure follows the Chapter 3 summary below).

In Chapter 3, psychophysical studies of visual recognition are combined with the EEG (electroencephalogram) signal to infer the dynamics of feature extraction and use during the recognition of facial expressions. The results reveal a process that integrates visual information over about 50 milliseconds prior to the face-sensitive N170 event-related potential, starting at the eye region and proceeding gradually towards lower regions of the face. The finding that informative features for recognition are processed not simultaneously but in an orderly progression over a short time period is instructive for understanding the processes involved in visual recognition, and in particular the integration of bottom-up and top-down processes.
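As an illustration of the diagnosticity measure described for Chapter 2, the following is a minimal, hypothetical Python sketch of how the intersection of fixations with diagnostic information might be quantified. The function name, the square sampling window, and the random stand-in map are assumptions made for illustration, not the thesis's actual analysis.

```python
import numpy as np

def fixation_diagnosticity(fixations, diagnostic_map, window=30):
    """Mean diagnostic value inside a square window (in pixels)
    centred on each fixation, returned in fixation order."""
    h, w = diagnostic_map.shape
    scores = []
    for x, y in np.round(fixations).astype(int):
        # Clamp the sampling window to the image borders.
        x0, x1 = max(x - window, 0), min(x + window + 1, w)
        y0, y1 = max(y - window, 0), min(y + window + 1, h)
        scores.append(diagnostic_map[y0:y1, x0:x1].mean())
    return np.array(scores)

# Toy usage: three fixations on a 256x256 stand-in diagnostic map
# (a real map would mark task-relevant facial pixels, normalised to [0, 1]).
rng = np.random.default_rng(0)
diag_map = rng.random((256, 256))
fixations = np.array([[128, 90], [120, 150], [132, 190]])  # (x, y) pairs
print(fixation_diagnosticity(fixations, diag_map))
```

Under the thesis's second finding, such scores would tend to increase from the first to the final fixation, consistent with a top-down drive to acquire task-relevant information.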
In Chapter 4 we use fMRI to investigate task-dependent activation to diagnostic features in early visual areas, implicating top-down mechanisms, since V1 is traditionally held to exhibit only simple response properties. Chapter 3 revealed that diagnostic features modulate the temporal dynamics of brain signals in higher visual areas; within the hierarchical visual system, however, it is not known whether an early (V1/V2/V3) sensitivity to diagnostic information contributes to categorical facial judgements, conceivably driven by top-down signals triggered during visual processing. Using retinotopic mapping, we reveal task-dependent information extraction within the earliest cortical representation (V1) of two features known to be differentially necessary for face recognition tasks (the eyes and the mouth). This strategic encoding of face images goes beyond typical V1 properties and suggests a top-down influence of task extending down to the earliest retinotopic stages of visual processing.

The significance of these data is discussed in the context of the cortical face network and bidirectional processing in the visual system. The visual cognition of facial expression processing concerns the interactive processing of bottom-up, sensory-driven information and top-down mechanisms that relate visual input to categorical judgements. The three experiments presented in this thesis are summarized in Chapter 5 in relation to how diagnostic features can be used to explore such processing in the human brain, leading to proficient facial expression categorization.
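To make the Chapter 4 logic concrete, below is a minimal, hypothetical Python sketch of the kind of task-by-feature contrast that task-dependent extraction in V1 predicts. The simulated array layout, the injected effect sizes, and the simple interaction contrast are illustrative assumptions, not the thesis's fMRI analysis pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated GLM betas: betas[task, feature, voxel], for two tasks
# (eye-diagnostic vs. mouth-diagnostic categorisation) and the V1
# sub-regions retinotopically mapped to the eyes and the mouth.
betas = rng.normal(size=(2, 2, 120))
# Inject the predicted effect for the demo: each feature's V1 response
# is boosted in the task for which that feature is diagnostic.
betas[0, 0] += 0.5   # eye region, eye-diagnostic task
betas[1, 1] += 0.5   # mouth region, mouth-diagnostic task

# Task-dependent extraction predicts a task-by-feature interaction:
# the (eye - mouth) response difference should flip sign between tasks.
interaction = (betas[0, 0] - betas[0, 1]) - (betas[1, 0] - betas[1, 1])
print("mean interaction contrast per voxel:", interaction.mean())
```

A reliably positive interaction of this kind would be consistent with task-dependent encoding, which purely bottom-up, stimulus-driven V1 responses would not produce.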

【 Preview 】
Attachments
File: Diagnostic information use to understand brain mechanisms of facial expression categorization
Size: 11236 KB
Format: PDF
Document metrics
Downloads: 13; Views: 26