Journal Article Details
Frontiers in Medicine
Brain Tumor Segmentation via Multi-Modalities Interactive Feature Learning
Jingyang Ai1  Bo Yang2  Hong Peng3  Lin Ma3  Lihua An4  Jingyi Yang5  Zheng You6  Bo Wang7 
[1] Beijing Jingzhen Medical Technology Ltd., Beijing, China
[2] China Institute of Marine Technology & Economy, Beijing, China
[3] Department of Radiology, The 1st Medical Center, Chinese PLA General Hospital, Beijing, China
[4] Radiology Department, Affiliated Hospital of Jining Medical University, Jining, China
[5] School of Artificial Intelligence, Xidian University, Xi'an, China
[6] The State Key Laboratory of Precision Measurement Technology and Instruments, Department of Precision Instrument, Tsinghua University, Beijing, China
[7] Beijing Jingzhen Medical Technology Ltd., Beijing, China
Keywords: brain tumor segmentation; deep neural network; multi-modality learning; feature fusion; attention mechanism
DOI: 10.3389/fmed.2021.653925
Source: Frontiers
【 Abstract 】

Automatic segmentation of brain tumors from multi-modality magnetic resonance image data has the potential to enable preoperative planning and intraoperative volume measurement. Recent advances in deep convolutional neural network technology have opened up an opportunity to achieve end-to-end segmentation of brain tumor areas. However, the medical image data available for brain tumor segmentation are relatively scarce and the appearance of brain tumors varies widely, so it is difficult to find a learnable pattern that directly describes tumor regions. In this paper, we propose a novel cross-modality interactive feature learning framework to segment brain tumors from multi-modality data. The core idea is that multi-modality MR data contain rich patterns of the normal brain regions, which can be easily captured and potentially used to detect the non-normal brain regions, i.e., brain tumor regions. The proposed multi-modalities interactive feature learning framework consists of two modules: a cross-modality feature extracting module and an attention-guided feature fusing module, which aim at exploring the rich patterns across multiple modalities and guiding the interaction and fusion of the rich features from different modalities. Comprehensive experiments conducted on the BraTS 2018 benchmark show that the proposed cross-modality feature learning framework effectively improves brain tumor segmentation performance compared with both baseline methods and state-of-the-art methods.
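The abstract does not give implementation details of the two modules; the sketch below is only a minimal illustration, in PyTorch, of how per-modality feature extraction followed by attention-guided channel fusion might be wired together. All class names, layer choices, and hyperparameters (ModalityEncoder, AttentionGuidedFusion, channel counts) are hypothetical assumptions, not the authors' actual architecture.

```python
# Minimal sketch: per-modality encoders + attention-guided feature fusion.
# Names and layer choices are hypothetical, not taken from the paper.
from typing import List

import torch
import torch.nn as nn


class ModalityEncoder(nn.Module):
    """Small 3D convolutional encoder, one instance per MR sequence."""

    def __init__(self, in_channels: int = 1, features: int = 16):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv3d(in_channels, features, kernel_size=3, padding=1),
            nn.InstanceNorm3d(features),
            nn.ReLU(inplace=True),
            nn.Conv3d(features, features, kernel_size=3, padding=1),
            nn.InstanceNorm3d(features),
            nn.ReLU(inplace=True),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.conv(x)


class AttentionGuidedFusion(nn.Module):
    """Fuse per-modality feature maps with a channel-attention gate."""

    def __init__(self, features: int = 16, num_modalities: int = 4):
        super().__init__()
        fused = features * num_modalities
        # Squeeze-and-excitation style gate over the concatenated channels.
        self.attention = nn.Sequential(
            nn.AdaptiveAvgPool3d(1),
            nn.Conv3d(fused, fused // 4, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(fused // 4, fused, kernel_size=1),
            nn.Sigmoid(),
        )
        self.project = nn.Conv3d(fused, features, kernel_size=1)

    def forward(self, modality_features: List[torch.Tensor]) -> torch.Tensor:
        x = torch.cat(modality_features, dim=1)  # (B, M*F, D, H, W)
        x = x * self.attention(x)                # re-weight each channel
        return self.project(x)                   # fused feature map


if __name__ == "__main__":
    # Four MR sequences (e.g., T1, T1ce, T2, FLAIR), one encoder each.
    encoders = nn.ModuleList([ModalityEncoder() for _ in range(4)])
    fusion = AttentionGuidedFusion(features=16, num_modalities=4)

    volumes = [torch.randn(1, 1, 32, 32, 32) for _ in range(4)]
    fused = fusion([enc(v) for enc, v in zip(encoders, volumes)])
    print(fused.shape)  # torch.Size([1, 16, 32, 32, 32])
```

The fused feature map would then feed a segmentation decoder; the attention gate here simply lets the network down-weight channels from less informative modalities before fusion, which is one common way to realize "attention-guided" fusion.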

【 授权许可】

CC BY   

【 Preview 】
Attachment List
Files Size Format View
RO202107133761340ZK.pdf 1527KB PDF download
Document Metrics
Downloads: 8  Views: 4