Original research
Automated catheter segmentation and tip detection in cerebral angiography with topology-aware geometric deep learning
  1. Rahul Ghosh1,2,
  2. Kelvin Wong1,3,
  3. Yi Jonathan Zhang4,
  4. Gavin W Britz5,6,
  5. Stephen T C Wong1,3

  1. Systems Medicine and Bioengineering, Houston Methodist Research Institute, Houston, Texas, USA
  2. Biomedical Engineering, Texas A&M University System, College Station, Texas, USA
  3. Texas A&M University School of Medicine, Bryan, Texas, USA
  4. Neurological Surgery, Queen's Medical Center, Honolulu, Hawaii, USA
  5. Neurological Surgery, Houston Methodist Hospital, Houston, Texas, USA
  6. Houston Methodist Neurological Institute, Houston, Texas, USA

  Correspondence to Dr Stephen T C Wong, Systems Medicine and Bioengineering, Houston Methodist Research Institute, Houston, TX 77030, USA; stwong{at}houstonmethodist.org; Dr Kelvin Wong, Department of Radiology, Houston Methodist Hospital, 6670 Bertner Ave, Houston, TX, USA; kwong{at}houstonmethodist.org

Abstract

Background Visual perception of catheters and guidewires on x-ray fluoroscopy is essential for neurointervention. Endovascular robots with teleoperation capabilities are being developed, but they cannot ‘see’ intravascular devices, which precludes the artificial intelligence (AI) augmentation that could improve precision and autonomy. Deep learning has not been explored for neurointervention, and prior work in cardiovascular settings is inadequate because it segments only device tips, whereas neurointervention requires segmentation of the entire device owing to the use of coaxial devices. This study therefore develops an automatic and accurate image-based catheter segmentation method for cerebral angiography using deep learning.

Methods Catheters and guidewires were manually annotated on 3831 fluoroscopy frames collected prospectively from 40 patients undergoing cerebral angiography. We proposed a topology-aware geometric deep learning method (TAG-DL) and compared it with state-of-the-art deep learning segmentation models: UNet, nnUNet, and TransUNet. All models were trained on frontal view sequences and tested on both frontal and lateral view sequences from unseen patients. Results were assessed with the centerline Dice score and tip-distance error.
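
For illustration, a minimal sketch of how these two evaluation metrics can be computed from binary masks is shown below. The abstract does not specify the exact formulation used, so the centerline Dice follows the widely used clDice definition (skeleton-versus-mask overlap); the function names and the assumption of a known isotropic pixel spacing are illustrative, not the authors' implementation.

```python
import numpy as np
from skimage.morphology import skeletonize

def centerline_dice(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-8) -> float:
    """clDice-style centerline Dice: overlap of each mask's skeleton with the other mask."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    s_pred, s_gt = skeletonize(pred), skeletonize(gt)
    t_prec = (s_pred & gt).sum() / (s_pred.sum() + eps)  # topology precision
    t_sens = (s_gt & pred).sum() / (s_gt.sum() + eps)    # topology sensitivity
    return float(2 * t_prec * t_sens / (t_prec + t_sens + eps))

def tip_distance_error_mm(pred_tip, gt_tip, pixel_spacing_mm: float) -> float:
    """Euclidean distance between predicted and annotated tip positions, in millimetres."""
    d_px = np.linalg.norm(np.asarray(pred_tip, float) - np.asarray(gt_tip, float))
    return float(d_px * pixel_spacing_mm)
```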

Results The TAG-DL and nnUNet models outperformed TransUNet and UNet. The best performing model was nnUNet, achieving a mean centerline Dice score of 0.98±0.01 and a median tip-distance error of 0.43 (IQR 0.88) mm. Incorporating digital subtraction masks, with or without contrast, significantly improved performance on unseen patients and further enabled exceptional performance on lateral view fluoroscopy even though the models were never trained on this view.
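
The abstract does not describe how the digital subtraction masks were supplied to the networks. One plausible reading, sketched below under that assumption, is to subtract a pre-contrast mask frame from the live frame and feed the result alongside the raw frame as an extra input channel; all function names here are hypothetical.

```python
import numpy as np

def digital_subtraction(live: np.ndarray, mask_frame: np.ndarray) -> np.ndarray:
    """Subtract a pre-contrast mask frame to suppress static anatomy (bone, soft tissue)."""
    sub = live.astype(np.float32) - mask_frame.astype(np.float32)
    return (sub - sub.min()) / (np.ptp(sub) + 1e-8)  # rescale to [0, 1]

def two_channel_input(live: np.ndarray, mask_frame: np.ndarray) -> np.ndarray:
    """Stack the raw frame with its subtraction image as a (2, H, W) network input."""
    frame = live.astype(np.float32)
    frame = (frame - frame.min()) / (np.ptp(frame) + 1e-8)
    return np.stack([frame, digital_subtraction(live, mask_frame)], axis=0)
```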

Conclusions These results are the first step towards AI augmentation for robotic neurointervention that could amplify the reach, productivity, and safety of a limited neurointerventional workforce.

  • angiography
  • catheter
  • navigation
  • technology
  • technique

Data availability statement

Data are available upon reasonable request. The code and data that support the findings of this study are available from the corresponding author upon reasonable request, with consideration given to the sensitive clinical nature of the data.


Footnotes

  • Twitter @ghoshrx

  • Contributors STCW, KW, and RG had full access to all data in the study and take responsibility for the data integrity and accuracy of the analysis. RG, KW, and STCW participated in the concept and design. RG, KW, GWB, and YJZ participated in the acquisition, analysis, or interpretation of data, as well as review of testing data. All authors were involved in drafting and critical revision of the article for important intellectual content. RG performed the statistical analysis. STCW obtained the funding. KW and STCW participated in administrative, technical, or material support. STCW, KW, and GWB acted as supervisors. STCW and KW are guarantors of this work.

  • Funding This work was supported by the Ting Tsung & Wei Fong Chao Center for BRAIN (STCW) and the John S Dunn Research Foundation (STCW).

  • Competing interests None declared.

  • Provenance and peer review Not commissioned; externally peer reviewed.

  • Supplemental material This content has been supplied by the author(s). It has not been vetted by BMJ Publishing Group Limited (BMJ) and may not have been peer-reviewed. Any opinions or recommendations discussed are solely those of the author(s) and are not endorsed by BMJ. BMJ disclaims all liability and responsibility arising from any reliance placed on the content. Where the content includes any translated material, BMJ does not warrant the accuracy and reliability of the translations (including but not limited to local regulations, clinical guidelines, terminology, drug names and drug dosages), and is not responsible for any error and/or omissions arising from translation and adaptation or otherwise.