Journal article details
BMC Bioinformatics
VANet: a medical image fusion model based on attention mechanism to assist disease diagnosis
Research
Tiehu Fan1  Xiaohan Hu2  Xiongfei Li3  Kai Guo3 
Affiliations: College of Instrumentation and Electrical Engineering, Jilin University, Changchun, China; Department of Radiology, The First Hospital of Jilin University, Changchun, China; Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun, China; College of Computer Science and Technology, Jilin University, Changchun, China
Keywords: Medical image; Medical image fusion; Attention mechanism; Contextual information; Multi-scale feature extraction
DOI: 10.1186/s12859-022-05072-4
Received: 2022-05-06; Accepted: 2022-11-22; Published: 2022
Source: Springer
【 Abstract 】

Background: Today's biomedical imaging technology can present the morphological structure or functional metabolic information of organisms at different scales, such as organ, tissue, cell, molecule and gene. However, each imaging modality has its own scope of application, advantages and disadvantages. To strengthen the role of medical images in disease diagnosis, fusing biomedical image information across imaging modalities and scales has become an important research direction in medical imaging. Traditional medical image fusion methods concentrate on designing activity-level measurements and fusion rules; they do not mine the contextual features of images from different modalities, which limits the quality of the fused images.

Method: This paper proposes VANet, an attention-multiscale network medical image fusion model based on contextual features. The model selects five backbone modules of the VGG-16 network to build encoders that extract the contextual features of medical images. An attention-mechanism branch fuses the global contextual features, and a residual multiscale detail-processing branch fuses the local contextual features. Finally, a decoder performs cascade reconstruction of the features to obtain the fused image.

Results: Ten sets of images covering five diseases are selected from the AANLIB database to validate the VANet model. The structural images come from high-resolution MR scans, and the functional images come from SPECT and PET scans, which are good at describing organ blood flow and tissue metabolism. Fusion experiments are performed with twelve fusion algorithms, including the VANet model. Eight metrics covering different aspects are selected to build a fusion-quality evaluation system for assessing the fused images, and Friedman's test with the post-hoc Nemenyi test is introduced to provide a statistical comparison that demonstrates the superiority of the VANet model.

Conclusions: The VANet model fully captures and fuses the texture details and color information of the source images. In the fusion results, metabolic and structural information is well expressed, and the color information does not interfere with the structure and texture. Under the objective evaluation system, the metric values of the VANet model are generally higher than those of the other methods; in terms of efficiency, the time consumption of the model is acceptable; and in terms of scalability, the model is unaffected by the input order of the source images and can be extended to tri-modal fusion.
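As a rough illustration of the statistical comparison described in the Results, the sketch below applies Friedman's test followed by the post-hoc Nemenyi test to a 10 x 12 matrix of metric scores (image sets x fusion algorithms). It is a minimal example, not the paper's code: the scores are random placeholders, and the choice of scipy and scikit-posthocs is an assumption.

import numpy as np
from scipy.stats import friedmanchisquare
import scikit_posthocs as sp

# Placeholder scores: 10 AANLIB image sets (rows) x 12 fusion algorithms (columns).
# In practice each cell would hold one of the eight quality metrics computed for that method.
rng = np.random.default_rng(0)
scores = rng.random((10, 12))

# Friedman's test: do the 12 algorithms differ on this metric across the 10 image sets?
stat, p = friedmanchisquare(*[scores[:, j] for j in range(scores.shape[1])])
print(f"Friedman chi-square = {stat:.3f}, p = {p:.4f}")

# Post-hoc Nemenyi test: 12 x 12 matrix of pairwise p-values between algorithms.
pairwise_p = sp.posthoc_nemenyi_friedman(scores)
print(pairwise_p.round(3))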

【 License 】

CC BY   
© The Author(s) 2022

【 Preview 】
Attachment list
Files Size Format View
RO202305061688809ZK.pdf 4086KB PDF download