Fusion of information from multiple modalities in Human Computer Interfaces (HCI) has gained a lot of attention in recent years, and has far-reaching implications in many areas of human-machine interaction. However, a major limitation of current HCI fusion systems is that the fusion process tends to ignore the semantic nature of modalities, which may reinforce, complement, or contradict each other over time. Also, most systems are not robust in representing the ambiguity inherent in human gestures. In this work, we investigate an evidential reasoning based approach to intelligent multimodal fusion, and apply this algorithm to a proposed multimodal system consisting of a hand gesture sensor and a Brain-Computer Interface (BCI). This work makes three major contributions to the area of human computer interaction. First, we propose an algorithm for reconstructing the 3D hand pose from a 2D input video. Second, we develop a BCI using Steady State Visually Evoked Potentials, and show how a multimodal system consisting of the two sensors can improve the efficiency and reduce the complexity of the system while retaining the same level of accuracy. Finally, we propose a semantic fusion algorithm based on Transferable Belief Models, which can successfully fuse information from these two sensors to form meaningful concepts and resolve ambiguity. We also analyze the robustness of this system under various operating scenarios.
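The abstract above names a semantic fusion algorithm based on Transferable Belief Models (TBM). As a rough illustration of how such a fusion step can operate, the sketch below applies the TBM's unnormalized (conjunctive) combination rule to two hypothetical mass functions, one standing in for the hand gesture sensor and one for the BCI. The frame of discernment, command labels, and mass values are assumptions made for illustration only and are not taken from this work.

```python
# Illustrative sketch only: applies the TBM's unnormalized (conjunctive)
# combination rule to two hypothetical sensor mass functions. The commands
# and mass values below are invented for illustration.

def conjunctive_combination(m1, m2):
    """Unnormalized conjunctive rule of the TBM:
    m12(A) = sum over all B, C with B intersect C = A of m1(B) * m2(C).
    Conflicting mass stays on the empty set instead of being renormalized."""
    combined = {}
    for b, mass_b in m1.items():
        for c, mass_c in m2.items():
            a = b & c  # intersection of the two focal sets
            combined[a] = combined.get(a, 0.0) + mass_b * mass_c
    return combined

# Hypothetical frame of discernment: commands the fused system could recognize.
frame = frozenset({"select", "move", "rotate"})

# Hypothetical sensor outputs as mass functions; mass assigned to the whole
# frame expresses that sensor's ignorance/ambiguity.
m_gesture = {frozenset({"select"}): 0.6,
             frozenset({"select", "move"}): 0.3,
             frame: 0.1}
m_bci = {frozenset({"select"}): 0.5,
         frozenset({"rotate"}): 0.2,
         frame: 0.3}

m_fused = conjunctive_combination(m_gesture, m_bci)
for focal_set, mass in sorted(m_fused.items(), key=lambda kv: -kv[1]):
    label = ", ".join(sorted(focal_set)) if focal_set else "conflict (empty set)"
    print(f"m({label}) = {mass:.3f}")
```

Keeping the conflict mass on the empty set, as the TBM's open-world stance allows, is one way a fusion system can represent modalities that contradict each other instead of normalizing the disagreement away; a decision rule such as the pignistic transformation would typically then map the fused masses to a single command.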
Evidential Reasoning for Multimodal Fusion in Human Computer Interaction