Journal Article Details
Healthcare Technology Letters
Augmented reality visualisation for orthopaedic surgical guidance with pre- and intra-operative multimodal image data fusion
Antony J. Hodgson1  Rafeef Garbi1  Houssam El-Hariri1  Prashant Pandey1 
[1] University of British Columbia;
Keywords: computerised tomography; medical image processing; surgery; phantoms; data visualisation; image segmentation; orthopaedics; augmented reality; bone; image registration; biomedical ultrasonics; image fusion; helmet mounted displays; tracked ultrasound; operated surgical scene; bone surface segmentation; multimodal volume registration; corresponding intra-operative data; enhanced surgical scene; augmented reality visualisation; orthopaedic surgical guidance; minimally invasive surgical applications; intuitive visualisation; naturally immersive visualisation; three-dimensional imaging data; operator ergonomics; reduced fatigue; head-mounted AR displays; surgical navigation; bone structures; preoperative computed tomography data; intraoperative multimodal image data fusion; healthcare; simplified hand-eye coordination; 3D AR visualisation; optically-tracked US; CT image volumes; HoloLens; foam pelvis phantom; fiducial marker locations; root mean square errors
DOI  :  10.1049/htl.2018.5061
Source: DOAJ
【 Abstract 】

Augmented reality (AR) has proven to be a useful, exciting technology in several areas of healthcare. AR may especially enhance the operator's experience in minimally invasive surgical applications by providing more intuitive and naturally immersive visualisation in procedures that rely heavily on three-dimensional (3D) imaging data. Benefits include improved operator ergonomics, reduced fatigue, and simplified hand–eye coordination. Head-mounted AR displays hold great potential for enhancing surgical navigation given their compactness and intuitiveness of use. In this work, the authors propose a method that intra-operatively locates bone structures using tracked ultrasound (US), registers them to the corresponding pre-operative computed tomography (CT) data, and generates a 3D AR visualisation of the operated surgical scene through a head-mounted display. The proposed method deploys optically-tracked US, bone surface segmentation from the US and CT image volumes, and multimodal volume registration to align pre-operative data to the corresponding intra-operative data. The enhanced surgical scene is then visualised in an AR framework using a HoloLens. They demonstrate the method's utility using a foam pelvis phantom and quantitatively assess accuracy by comparing the locations of fiducial markers in the real and virtual spaces, yielding root mean square errors of 3.22, 22.46, and 28.30 mm in the x, y, and z directions, respectively.
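The pipeline the abstract describes hinges on two numerical steps: rigidly aligning pre-operative (CT-derived) bone surface points to intra-operative (US-derived) points, and reporting per-axis RMSE between paired real and virtual fiducial locations. The sketch below is illustrative only, not the authors' implementation: it uses the standard Kabsch/SVD solution for least-squares rigid alignment of paired point sets, and assumes fiducial coordinates are available as N×3 NumPy arrays in millimetres; the function names `rigid_register` and `per_axis_rmse` are hypothetical.

```python
import numpy as np


def rigid_register(source, target):
    """Least-squares rigid transform (Kabsch/SVD) mapping paired
    source points onto target points. Both are (N, 3) arrays.
    Returns rotation R (3x3) and translation t (3,)."""
    src_c = source.mean(axis=0)          # centroids
    tgt_c = target.mean(axis=0)
    H = (source - src_c).T @ (target - tgt_c)   # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection (det = -1) in the recovered rotation.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = tgt_c - R @ src_c
    return R, t


def per_axis_rmse(real, virtual):
    """RMSE along x, y, z between corresponding fiducial locations,
    as reported in the abstract (mm). Inputs are (N, 3) arrays."""
    return np.sqrt(((real - virtual) ** 2).mean(axis=0))
```

In practice, surface points segmented from US and CT are not in known correspondence, so a method like this would sit inside an iterative scheme (e.g. ICP-style closest-point matching); the abstract does not specify which registration algorithm was used.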

【 License 】

Unknown   

  Document metrics
  Downloads: 0   Views: 3