BioMedical Engineering OnLine

Object recognition and localization enhancement in visual prostheses: a real-time mixed reality simulation

Research

Olaf Hellwich [1], Slim Abdennadher [2], Seif Eldawlatly [3], Reham H. Elnabawy [4]

Affiliations:
- Chair of Computer Vision and Remote Sensing, Technische Universität Berlin, Berlin, Germany
- Computer Science and Engineering Department, Faculty of Media Engineering and Technology, German University in Cairo, Cairo, Egypt
- Computer Science Department, Faculty of Informatics and Computer Science, German International University, New Administrative Capital, Egypt
- Computer and Systems Engineering Department, Faculty of Engineering, Ain Shams University, 1 El-Sarayat St., Abbassia, Cairo, Egypt
- Computer Science and Engineering Department, The American University in Cairo, Cairo, Egypt
- Digital Media Engineering and Technology Department, Faculty of Media Engineering and Technology, German University in Cairo, Cairo, Egypt
Keywords: Simulated prosthetic vision; Object recognition; Object localization; Real-time mixed reality simulation
DOI: 10.1186/s12938-022-01059-7
Received: 2022-07-31; Accepted: 2022-12-12; Published: 2022
Source: Springer
【 Abstract 】
Blindness severely restricts a person's ability to carry out daily activities. Visual prostheses have been introduced to provide artificial vision to the blind, with the aim of helping them regain confidence and independence. In this article, we propose an approach that combines four image enhancement techniques to facilitate object recognition and localization for visual prostheses users: clip art representation of objects, edge sharpening, corner enhancement and electrode dropout handling. The proposed techniques were tested in a real-time mixed reality simulation environment that mimics the vision perceived by visual prostheses users. Twelve experiments, involving single objects, multiple objects and navigation, were conducted to measure the participants' performance in object recognition and localization. Object recognition was evaluated by measuring recognition time, recognition accuracy and confidence level; object localization was evaluated using two metrics, grasping attempt time and grasping accuracy. The results demonstrate that applying all enhancement techniques simultaneously yields higher accuracy, higher confidence and shorter times for recognizing and grasping objects than applying no enhancement or only pair-wise combinations of the techniques. Visual prostheses could benefit from the proposed approach to provide users with an enhanced perception.
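The abstract names the enhancement steps but this page carries no implementation details. Purely as an illustrative sketch, and not the authors' code, the snippet below shows one way edge sharpening, corner enhancement and a coarse phosphene-grid rendering with simulated electrode dropout could be prototyped with OpenCV; the specific operations (unsharp masking, Harris corner response), all parameter values and file names are assumptions, and the clip art representation and dropout-handling steps are omitted.

```python
# Illustrative sketch only: approximates the kind of pre-processing and
# simulated prosthetic vision described in the abstract. Not the authors' code.
import cv2
import numpy as np


def sharpen_edges(gray, amount=1.5, sigma=1.0):
    """Edge sharpening via unsharp masking: add back the high-pass residual."""
    blurred = cv2.GaussianBlur(gray, (0, 0), sigma)
    return cv2.addWeighted(gray, 1 + amount, blurred, -amount, 0)


def enhance_corners(gray, strength=80):
    """Corner enhancement: brighten pixels with a strong Harris corner response."""
    harris = cv2.cornerHarris(np.float32(gray), blockSize=2, ksize=3, k=0.04)
    mask = harris > 0.01 * harris.max()
    out = gray.copy()
    out[mask] = np.clip(out[mask].astype(np.int32) + strength, 0, 255).astype(np.uint8)
    return out


def simulate_phosphenes(gray, grid=(32, 32), dropout=0.1, seed=0):
    """Crude simulated prosthetic vision: downsample to an electrode grid,
    zero out a random fraction of electrodes (dropout), then upsample."""
    low = cv2.resize(gray, grid, interpolation=cv2.INTER_AREA)
    rng = np.random.default_rng(seed)
    low[rng.random(low.shape) < dropout] = 0  # simulated dead electrodes
    return cv2.resize(low, gray.shape[::-1], interpolation=cv2.INTER_NEAREST)


if __name__ == "__main__":
    frame = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input frame
    if frame is not None:
        enhanced = enhance_corners(sharpen_edges(frame))
        cv2.imwrite("simulated_view.png", simulate_phosphenes(enhanced))
```

In a real mixed reality setup the same chain would run per camera frame before the phosphene rendering; here a single still image stands in for that stream.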
【 License 】
CC BY
© The Author(s) 2022
【 Files 】
| Files | Size | Format | View |
|---|---|---|---|
| RO202305064995339ZK.pdf | 3110 KB | PDF | |