Journal Article Details
IEEE Access
Toward Encoding Vision-to-Touch With Convolutional Neural Networks
Mauricio Marengoni1  Ricardo Ribani1  Rodrigo Freitas Lima2 
[1] Universidade Presbiteriana Mackenzie, São Paulo, Brazil;
Keywords: Assistive technology; computer vision; haptic interfaces; neural networks; sensory substitution
DOI  :  10.1109/ACCESS.2019.2951614
Source: DOAJ
【 Abstract 】

The task of encoding visual information into tactile information has been studied since the 1960s, yet converting an image into a small set of signals to be delivered to the user as tactile input remains an open challenge. In this study, we evaluated two methods that had never before been applied to vision-to-touch encoding with convolutional neural networks: a bag of convolutional features (BoF) and a vector of locally aggregated descriptors (VLAD). We also present a new method for evaluating the semantic property of the encoded signal, based on the idea that objects with similar features should produce similar signals on the tactile interface; from this idea we created a semantic property evaluation (SPE) metric. Using this metric, we demonstrated the advantage of the BoF and VLAD methods, which obtained SPE scores of 70.7% and 64.5%, respectively, a considerable improvement over the downscaling method used by many systems such as BrainPort, which scored 56.2%.
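The abstract does not give the authors' implementation details, but VLAD itself is a standard aggregation scheme: local descriptors (here, columns of a CNN feature map) are assigned to their nearest centre in a k-means codebook, the residuals per centre are summed, and the concatenated result is normalised. A minimal NumPy sketch of that general scheme, not the paper's exact pipeline (function name, normalisation choices, and the random test data below are illustrative assumptions):

```python
import numpy as np

def vlad_encode(descriptors, codebook):
    """Aggregate local descriptors into a VLAD vector (illustrative sketch).

    descriptors: (n, d) array of local features, e.g. spatial columns of a
    CNN feature map; codebook: (k, d) array of centres learned offline with
    k-means. Returns a flattened, L2-normalised vector of shape (k * d,).
    """
    k, d = codebook.shape
    # Hard-assign each descriptor to its nearest codebook centre.
    dists = np.linalg.norm(descriptors[:, None, :] - codebook[None, :, :], axis=2)
    assign = np.argmin(dists, axis=1)
    vlad = np.zeros((k, d))
    for i in range(k):
        members = descriptors[assign == i]
        if len(members):
            # Sum of residuals between the descriptors and their centre.
            vlad[i] = (members - codebook[i]).sum(axis=0)
    vlad = vlad.ravel()
    # Signed square-root (power) normalisation, then global L2 normalisation.
    vlad = np.sign(vlad) * np.sqrt(np.abs(vlad))
    norm = np.linalg.norm(vlad)
    return vlad / norm if norm > 0 else vlad
```

Because the output is a fixed-length, normalised vector regardless of image size, nearby points in this space correspond to images with similar local features, which is exactly the property the SPE metric is designed to reward.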

【 License 】

Unknown   
