Journal Article Details
Frontiers in Neuroscience
Surrounding-aware representation prediction in Birds-Eye-View using transformers
Neuroscience
Jiahui Yu1  Yutong Zhang1  Rui Huang1  Yongquan Chen1  Wenli Zheng2 
[1] Shenzhen Institute of Artificial Intelligence and Robotics for Society, and the SSE/IRIM, The Chinese University of Hong Kong, Shenzhen, Guangdong, China; [2] The Shenzhen Academy of Inspection Quarantine, Shenzhen, Guangdong, China
Keywords: BEV maps; deep learning; attention; transformers; autonomous driving
DOI: 10.3389/fnins.2023.1219363
Received: 2023-05-09; Accepted: 2023-06-13; Published: 2023
Source: Frontiers
【 Abstract 】

Birds-Eye-View (BEV) maps provide an accurate representation of sensory cues present in the surroundings, including dynamic and static elements. Generating a semantic representation of BEV maps is a challenging task since it relies on object detection and image segmentation. Recent studies have developed Convolutional Neural Networks (CNNs) to tackle this challenge. However, current CNN-based models encounter a bottleneck in perceiving subtle nuances of information due to their limited capacity, which constrains the efficiency and accuracy of representation prediction, especially for multi-scale and multi-class elements. To address this issue, we propose novel neural networks for BEV semantic representation prediction that are built upon Transformers without convolution layers, differing significantly from both existing pure CNNs and hybrid architectures that merge CNNs and Transformers. Given a sequence of image frames as input, the proposed neural networks directly output BEV maps with per-class probabilities in an end-to-end manner. The core innovations of this study comprise (1) a new pixel generation method powered by Transformers, (2) a novel algorithm for image-to-BEV transformation, and (3) a novel network for image feature extraction using attention mechanisms. We evaluate the proposed model's performance on two challenging benchmarks, the NuScenes dataset and the Argoverse 3D dataset, and compare it with state-of-the-art methods. Results show that the proposed model outperforms CNNs, achieving relative improvements of 2.4% and 5.2% on the NuScenes and Argoverse 3D datasets, respectively.
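To make the convolution-free, end-to-end image-to-BEV idea described in the abstract more concrete, the following is a minimal, hypothetical PyTorch sketch. It is not the authors' implementation: all module names, hyperparameters, and the choice of linear patch embedding plus cross-attention BEV queries are assumptions used only to illustrate how a Transformer-only pipeline could map camera images to per-class BEV probabilities.

```python
# Hypothetical sketch of a convolution-free image-to-BEV head (NOT the paper's code).
import torch
import torch.nn as nn


class ImageToBEV(nn.Module):
    def __init__(self, img_size=224, patch=16, dim=256, heads=8, depth=4,
                 bev_size=50, num_classes=14):
        super().__init__()
        num_patches = (img_size // patch) ** 2
        self.patch = patch
        # Convolution-free patch embedding: flatten each patch and project linearly.
        self.embed = nn.Linear(3 * patch * patch, dim)
        self.pos = nn.Parameter(torch.zeros(1, num_patches, dim))
        # Self-attention encoder extracts image features (no conv layers).
        enc_layer = nn.TransformerEncoderLayer(dim, heads, dim * 4, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, depth)
        # Learned BEV queries: one token per BEV grid cell.
        self.bev_queries = nn.Parameter(torch.zeros(1, bev_size * bev_size, dim))
        dec_layer = nn.TransformerDecoderLayer(dim, heads, dim * 4, batch_first=True)
        self.decoder = nn.TransformerDecoder(dec_layer, depth)
        # Per-cell classification head producing per-class probabilities.
        self.head = nn.Linear(dim, num_classes)
        self.bev_size = bev_size

    def forward(self, images):  # images: (B, 3, H, W)
        B, C, H, W = images.shape
        p = self.patch
        # Split the image into non-overlapping patches without any convolution.
        x = images.unfold(2, p, p).unfold(3, p, p)            # (B, 3, H/p, W/p, p, p)
        x = x.permute(0, 2, 3, 1, 4, 5).reshape(B, -1, C * p * p)
        tokens = self.encoder(self.embed(x) + self.pos)
        # BEV queries cross-attend to image tokens (image-to-BEV transformation).
        bev = self.decoder(self.bev_queries.expand(B, -1, -1), tokens)
        logits = self.head(bev).view(B, self.bev_size, self.bev_size, -1)
        return logits.sigmoid()  # per-class probability for each BEV cell


# Usage example (hypothetical shapes):
# model = ImageToBEV()
# probs = model(torch.randn(2, 3, 224, 224))   # -> (2, 50, 50, 14)
```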

【 License 】

Unknown   
Copyright © 2023 Yu, Zheng, Chen, Zhang and Huang.

【 Preview 】
Attachment List
File                      Size    Format
RO202310107417105ZK.pdf   2464KB  PDF
Document Metrics
Downloads: 8    Views: 0