Journal article details
EJNMMI Physics
A review of PET attenuation correction methods for PET-MR
Review
Paul Marsden [1]; Georgios Krokos [1]; Joel Dunn [1]; Jane MacKewn [1]
[1] School of Biomedical Engineering and Imaging Sciences, The PET Centre at St Thomas’ Hospital London, King’s College London, 1st Floor Lambeth Wing, Westminster Bridge Road, SE1 7EH, London, UK;
Keywords: PET-MR; Attenuation correction; Dixon; UTE; MLAA; Atlas-based attenuation correction; Deep-learning-based attenuation correction; Pseudo-CT
DOI: 10.1186/s40658-023-00569-0
Received: 2023-04-14; Accepted: 2023-08-07; Published: 2023
Source: Springer
【 Abstract 】

Although thirteen years have passed since the installation of the first PET-MR system, these scanners still constitute a very small proportion of the total number of hybrid PET systems installed. This is in stark contrast to the rapid expansion of PET-CT, which quickly established its importance in patient diagnosis within a similar timeframe. One of the main hurdles is the development of an accurate, reproducible and easy-to-use method for attenuation correction. Quantitative discrepancies in PET images between the manufacturer-provided MR methods and the more established CT- or transmission-based attenuation correction methods have driven a continuous effort by the scientific community to develop a robust and accurate alternative. The alternatives can be divided into four broad categories: (i) MR-based, (ii) emission-based, (iii) atlas-based and (iv) machine-learning-based attenuation correction, the last of which is rapidly gaining momentum. The first is based on segmenting the MR images into various tissue classes and allocating a predefined attenuation coefficient to each tissue. Emission-based attenuation correction methods aim to exploit the PET emission data themselves by simultaneously reconstructing the radioactivity distribution and the attenuation image. Atlas-based attenuation correction methods aim to predict a CT or transmission image for a new patient from their MR image, using databases of CT or transmission images from the general population. Finally, machine learning methods build a model that predicts the required image from the acquired MR or non-attenuation-corrected PET image by exploiting the underlying features of the images. Deep learning methods are the dominant approach in this category: compared to more traditional machine learning, which uses structured data to build a model, deep learning makes direct use of the acquired images to identify underlying features. This up-to-date review categorises the attenuation correction approaches in PET-MR and surveys the literature for each category. The various approaches in each category are described and discussed, and after exploring each category separately, a general overview is given of the current status and potential future approaches, along with a comparison of the four outlined categories.
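The segmentation-based approach of category (i) can be made concrete with a short sketch. The Python/NumPy code below is a minimal illustration, not any vendor's implementation: it assumes an integer tissue-label volume (e.g., from a Dixon fat/water segmentation), maps each label to a commonly quoted 511 keV linear attenuation coefficient to form a μ-map, and computes the attenuation correction factor for a line of response as the exponential of the line integral of μ. The label encoding, the μ values and the one-voxel-per-step path approximation are all illustrative assumptions.

```python
import numpy as np

# Illustrative 511 keV linear attenuation coefficients (cm^-1) for the
# tissue classes of a Dixon-style segmentation. Exact values differ
# between vendors and publications; these are commonly quoted figures.
MU_511KEV = {
    0: 0.000,  # air / background
    1: 0.018,  # lung
    2: 0.086,  # fat
    3: 0.096,  # soft tissue
}

def labels_to_mumap(labels: np.ndarray) -> np.ndarray:
    """Allocate a predefined attenuation coefficient to each tissue label."""
    mumap = np.zeros(labels.shape, dtype=np.float32)
    for label, mu in MU_511KEV.items():
        mumap[labels == label] = mu
    return mumap

def acf_for_lor(mumap: np.ndarray, lor_voxels: np.ndarray,
                voxel_size_cm: float) -> float:
    """Attenuation correction factor exp(line integral of mu) for one
    line of response, given the (N, 3) voxel indices it traverses.
    The path length per voxel is approximated by the voxel size; a real
    implementation would ray-trace exact intersection lengths
    (e.g., Siddon's algorithm).
    """
    mu_along_lor = mumap[tuple(lor_voxels.T)]
    return float(np.exp(mu_along_lor.sum() * voxel_size_cm))

# Example: a straight 3-voxel path through uniform soft tissue.
labels = np.full((4, 4, 4), 3, dtype=np.int32)      # all soft tissue
mumap = labels_to_mumap(labels)
path = np.array([[1, 1, 0], [1, 1, 1], [1, 1, 2]])  # voxels along the LOR
print(acf_for_lor(mumap, path, voxel_size_cm=0.4))  # exp(0.096 * 1.2) ~= 1.122
```

Measured coincidence counts along each line of response are multiplied by this factor during reconstruction. The emission-based (MLAA) methods of category (ii) differ in that μ is not predefined but estimated jointly with the activity distribution from the emission data themselves.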

【 License 】

CC BY   
© Springer Nature Switzerland AG 2023
