| Chinese Journal of Mechanical Engineering | |
| Adaptive Multi-modal Fusion Instance Segmentation for CAEVs in Complex Conditions: Dataset, Framework and Verifications | |
| Pai Peng [1], Weichao Zhuang [1], Keke Geng [1], Shuaipeng Liu [1], Yanbo Lu [1], Guodong Yin [1] | |
| [1] School of Mechanical Engineering, Southeast University, Nanjing, China; | |
| Keywords: Connected autonomous electrified vehicles; Multi-modal fusion; Semi-automatic annotation; Sharpening mixture of experts; Comparative experiments | |
| DOI : 10.1186/s10033-021-00602-2 | |
| Source: Springer | |
【 Abstract 】
Current work on environmental perception for connected autonomous electrified vehicles (CAEVs) mainly focuses on object detection under good weather and illumination conditions; such systems often perform poorly in adverse scenarios and offer only limited scene parsing ability. This paper develops an end-to-end sharpening mixture of experts (SMoE) fusion framework to improve the robustness and accuracy of perception systems for CAEVs under complex illumination and weather conditions. Three original contributions distinguish this work from the existing literature. First, the Complex KITTI dataset is introduced, consisting of 7481 pairs of modified KITTI RGB images and generated LiDAR dense depth maps, finely annotated at the instance level with the proposed semi-automatic annotation method. Second, the SMoE fusion approach is devised to adaptively learn robust kernels from complementary modalities. Third, comprehensive comparative experiments are conducted, and the results show that the proposed SMoE framework yields significant improvements over other fusion techniques in adverse environmental conditions. This research thus provides an SMoE fusion framework that improves the scene parsing ability of CAEV perception systems in adverse conditions.
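The abstract does not give the internals of the SMoE fusion; as a rough illustration of the general idea of gated mixture-of-experts fusion with a sharpening temperature (all function and variable names here are hypothetical, not taken from the paper), a minimal NumPy sketch might look like:

```python
import numpy as np

def softmax(x, axis=0):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def moe_fuse(rgb_feat, depth_feat, gate_logits, temperature=1.0):
    """Fuse two modality feature maps with per-pixel expert weights.

    rgb_feat, depth_feat: (C, H, W) feature maps from each modality branch.
    gate_logits: (2, H, W) unnormalized expert scores from a gating network.
    temperature < 1 "sharpens" the softmax, pushing the weights toward
    the modality the gate considers more reliable at each pixel.
    """
    weights = softmax(gate_logits / temperature, axis=0)  # (2, H, W), sums to 1
    return weights[0] * rgb_feat + weights[1] * depth_feat

# Toy example with random features standing in for network activations
rng = np.random.default_rng(0)
rgb = rng.normal(size=(4, 8, 8))
depth = rng.normal(size=(4, 8, 8))
logits = rng.normal(size=(2, 8, 8))
fused = moe_fuse(rgb, depth, logits, temperature=0.5)
print(fused.shape)  # (4, 8, 8)
```

In practice the gating network and expert branches would be trained jointly end-to-end; this sketch only shows the fusion arithmetic, not the learning procedure described in the paper.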
【 License 】
CC BY
【 Preview 】
| Files | Size | Format | View |
|---|---|---|---|
| RO202109174488742ZK.pdf | 1899KB | PDF | |