Journal Article Details
Sensors
A Context-Aware-Based Audio Guidance System for Blind People Using a Multimodal Profile Model
Qing Lin [1]
[1] Electronic Engineering Department, Soongsil University, 511 Sangdo-Dong, Dongjak-Gu, Seoul 156-743, Korea
Keywords: electronic mobility aids; sensor fusion; object detection; Bayesian network; context-aware guidance; multimodal information transformation
DOI: 10.3390/s141018670
Source: MDPI
【 Abstract 】

A wearable guidance system is designed to provide context-dependent guidance messages to blind people while they traverse local pathways. The system is composed of three parts: moving scene analysis, walking context estimation and audio message delivery. The combination of a downward-pointing laser scanner and a camera is used to solve the challenging problem of moving scene analysis. By integrating laser data profiles and image edge profiles, a multimodal profile model is constructed to jointly estimate the ground plane, object locations and object types using a Bayesian network. The outputs of the moving scene analysis are further employed to estimate the walking context, which is defined as a fuzzy safety level inferred through a fuzzy logic model. Depending on the estimated walking context, the audio messages that best suit the current context are delivered to the user in a flexible manner. The proposed system is tested under various local pathway scenes, and the results confirm its effectiveness in assisting blind people to attain autonomous mobility.
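To illustrate the walking-context step described in the abstract, the following is a minimal sketch (not the authors' implementation) of a fuzzy safety-level estimate computed from hypothetical scene-analysis outputs, followed by a context-dependent choice of audio message. The membership functions, rule weights, thresholds and message texts are illustrative assumptions; the paper's actual fuzzy rules and Bayesian network are not reproduced here.

```python
# Sketch: fuzzy safety level -> context-dependent audio message.
# All parameters below are hypothetical, for illustration only.

def tri(x, a, b, c):
    """Triangular membership function peaking at b over [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def safety_level(obstacle_dist_m, obstacle_on_path):
    """Fuzzy safety level in [0, 1]; higher means a safer walking context."""
    near = tri(obstacle_dist_m, 0.0, 0.5, 2.0)
    mid = tri(obstacle_dist_m, 1.0, 2.5, 4.0)
    far = min(1.0, max(0.0, (obstacle_dist_m - 3.0) / 2.0))
    # Hypothetical rule base: danger is highest when a near obstacle
    # lies directly on the walking path.
    danger = near * (1.0 if obstacle_on_path else 0.4)
    caution = mid
    safe = far
    total = danger + caution + safe or 1.0
    # Weighted (centroid-like) defuzzification onto a 0..1 safety scale.
    return (0.1 * danger + 0.5 * caution + 0.9 * safe) / total

def guidance_message(level, obstacle_type="obstacle"):
    """Pick an audio message whose urgency matches the fuzzy safety level."""
    if level < 0.3:
        return f"Stop: {obstacle_type} directly ahead."
    if level < 0.6:
        return f"Caution: {obstacle_type} approaching."
    return "Path clear, continue straight."

if __name__ == "__main__":
    lvl = safety_level(obstacle_dist_m=1.2, obstacle_on_path=True)
    print(f"safety level = {lvl:.2f}: {guidance_message(lvl, 'pedestrian')}")
```

In this sketch, a nearby on-path obstacle yields a low safety level and an urgent message, while a clear scene yields a high level and a routine message, mirroring the flexible, context-dependent delivery described in the abstract.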

【 License 】

CC BY
© 2014 by the authors; licensee MDPI, Basel, Switzerland.

【 Preview 】
Attachment list
Files                    Size     Format   View
RO202003190021097ZK.pdf  2338 KB  PDF      download
Document metrics
Downloads: 4   Views: 3