Journal Article Details
Healthcare Technology Letters
Multi-modal imaging, model-based tracking, and mixed reality visualisation for orthopaedic surgery
article
Sing Chun Lee1  Bernhard Fuerst2  Keisuke Tateno3  Alex Johnson5  Javad Fotouhi1  Greg Osgood5  Federico Tombari3  Nassir Navab1 
[1]Laboratory for Computational Sensing & Robotics, Johns Hopkins University
[2]Verb Surgical
[3]Technische Universität München
[4]Canon Inc.
[5]Department of Orthopaedic Surgery, Johns Hopkins Hospital
Keywords: orthopaedics; surgery; medical image processing; diagnostic radiography; bone; image fusion; computerised tomography; iterative methods; image reconstruction; image segmentation; image registration; multimodal imaging; mixed reality visualisation; orthopaedic surgery; workflow; two-dimensional fluoroscopic images; complex 3D structures; pelvis; multimodal data fusion; model-based surgical tool tracking; mixed reality environment; screw placement; red-green-blue-depth camera; mobile C-arm; cone-beam computed tomography imaging space; iterative closest point algorithm; real-time automatic fusion; reconstructed surface; 3D point clouds; synthetic fluoroscopic images; CBCT imaging; adapted 3D model-based tracking algorithm; automatic tool segmentation; surgical tools; interactive 3D mixed reality environment; entry point; target registration error; tracking accuracy; partial occlusion
DOI  :  10.1049/htl.2017.0066
Subject classification: Gastroenterology and Hepatology
Source: Wiley
【 Abstract 】
Orthopaedic surgeons still follow the decades-old workflow of using dozens of two-dimensional fluoroscopic images to drill through complex 3D structures, e.g. the pelvis. This Letter presents a mixed reality support system, which incorporates multi-modal data fusion and model-based surgical tool tracking to create a mixed reality environment supporting screw placement in orthopaedic surgery. A red–green–blue–depth camera is rigidly attached to a mobile C-arm and calibrated to the cone-beam computed tomography (CBCT) imaging space via the iterative closest point algorithm. This allows real-time automatic fusion of the reconstructed surface and/or 3D point clouds with synthetic fluoroscopic images obtained through CBCT imaging. An adapted 3D model-based tracking algorithm with automatic tool segmentation allows tracking of surgical tools even when occluded by the surgeon's hand. The proposed interactive 3D mixed reality environment provides an intuitive understanding of the surgical site and supports surgeons in quickly localising the entry point and orienting the surgical tool during screw placement. The authors validate the augmentation by measuring the target registration error and evaluate the tracking accuracy in the presence of partial occlusion.
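The calibration step in the abstract aligns the RGB-D camera's depth data to the CBCT coordinate frame using the iterative closest point (ICP) algorithm. The following is a minimal point-to-point ICP sketch in NumPy, assuming brute-force nearest-neighbour correspondences and a closed-form Kabsch rigid alignment; it illustrates the general technique only and is not the authors' implementation:

```python
import numpy as np

def best_rigid_transform(src, dst):
    # Kabsch: least-squares rotation R and translation t mapping src onto dst.
    c_src, c_dst = src.mean(0), dst.mean(0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t

def icp(src, dst, iters=30):
    # Alternate nearest-neighbour matching with closed-form rigid alignment,
    # accumulating the total transform across iterations.
    cur = src.copy()
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(iters):
        # brute-force nearest neighbours (a k-d tree would be used in practice)
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        nn = dst[d2.argmin(1)]
        R, t = best_rigid_transform(cur, nn)
        cur = cur @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total

# Toy check: recover a small known rigid motion from a synthetic point cloud.
rng = np.random.default_rng(0)
pts = rng.normal(size=(200, 3))
angle = 0.1
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([0.05, -0.02, 0.03])
moved = pts @ R_true.T + t_true
R_est, t_est = icp(pts, moved)
```

In the system described above, the source points would come from the depth camera and the target from the surface reconstructed out of the CBCT volume; ICP converges reliably only when the initial misalignment is small, which is why the camera is rigidly mounted to the C-arm.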
【 License 】

CC BY|CC BY-ND|CC BY-NC|CC BY-NC-ND   

【 Preview 】
Attachment list
Files Size Format View
RO202107100000979ZK.pdf 213KB PDF download
Document metrics
Downloads: 0   Views: 0