Sensors
A Context-Aware-Based Audio Guidance System for Blind People Using a Multimodal Profile Model
Youngjoon Han [1], Qing Lin [1]
[1] Electronic Engineering Department, Soongsil University, 511 Sangdo-Dong, Dongjak-Gu, Seoul 156-743, Korea
Keywords: electronic mobility aids; sensor fusion; object detection; Bayesian network; context-aware guidance; multimodal information transformation
DOI: 10.3390/s141018670
Source: DOAJ
【 Abstract 】
A wearable guidance system is designed to provide context-dependent guidance messages to blind people while they traverse local pathways. The system is composed of three parts: moving scene analysis, walking context estimation, and audio message delivery. The combination of a downward-pointing laser scanner and a camera is used to solve the challenging problem of moving scene analysis. By integrating laser data profiles and image edge profiles, a multimodal profile model is constructed to jointly estimate the ground plane, object locations, and object types using a Bayesian network. The outputs of the moving scene analysis are further employed to estimate the walking context, which is defined as a fuzzy safety level inferred through a fuzzy logic model. Depending on the estimated walking context, the audio messages that best suit the current context are delivered to the user in a flexible manner. The proposed system is tested under various local pathway scenes, and the results confirm its efficiency in assisting blind people to attain autonomous mobility.
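The abstract's fuzzy safety-level idea can be illustrated with a minimal sketch. The membership functions, distance thresholds, and rule outputs below are illustrative assumptions, not the authors' actual model: a single input (distance to the nearest detected obstacle) is fuzzified into near/medium/far sets, and a weighted-average defuzzification yields a crisp safety level in [0, 1].

```python
# Hypothetical sketch of fuzzy safety-level inference; all sets,
# thresholds, and rule outputs are illustrative assumptions.

def tri(x, a, b, c):
    """Triangular membership function rising on [a, b], falling on [b, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def safety_level(obstacle_dist_m):
    """Map nearest-obstacle distance (metres) to a crisp safety level in [0, 1].

    Fuzzy sets over distance: near, medium, far.
    Rules: near -> unsafe (0.1); medium -> caution (0.5); far -> safe (0.9).
    Defuzzified by the weighted average of rule outputs.
    """
    near = tri(obstacle_dist_m, -1.0, 0.0, 2.0)
    medium = tri(obstacle_dist_m, 1.0, 3.0, 5.0)
    far = tri(obstacle_dist_m, 4.0, 6.0, 100.0)
    weights = [near, medium, far]
    outputs = [0.1, 0.5, 0.9]
    total = sum(weights)
    if total == 0.0:
        return 0.9  # no obstacle evidence: treat as safe
    return sum(w * o for w, o in zip(weights, outputs)) / total
```

In the paper's pipeline the input would instead come from the Bayesian-network scene analysis (object locations and types), but the defuzzification step would take this general shape.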
【 License 】
Unknown