PT - JOURNAL ARTICLE
AU - Ghosh, Rahul
AU - Wong, Kelvin
AU - Zhang, Yi Jonathan
AU - Britz, Gavin W
AU - Wong, Stephen T C
TI - Automated catheter segmentation and tip detection in cerebral angiography with topology-aware geometric deep learning
AID - 10.1136/jnis-2023-020300
DP - 2024 Mar 01
TA - Journal of NeuroInterventional Surgery
PG - 290-295
VI - 16
IP - 3
4099 - http://jnis.bmj.com/content/16/3/290.short
4100 - http://jnis.bmj.com/content/16/3/290.full
SO - J NeuroIntervent Surg 2024 Mar 01; 16
AB - Background: Visual perception of catheters and guidewires on x-ray fluoroscopy is essential for neurointervention. Endovascular robots with teleoperation capabilities are being developed, but they cannot ‘see’ intravascular devices, which precludes artificial intelligence (AI) augmentation that could improve precision and autonomy. Deep learning has not been explored for neurointervention, and prior work in cardiovascular scenarios is inadequate because it segments only device tips, whereas neurointervention requires segmentation of the entire structure due to coaxial devices. Therefore, this study develops an automatic and accurate image-based catheter segmentation method in cerebral angiography using deep learning.
Methods: Catheters and guidewires were manually annotated on 3831 fluoroscopy frames collected prospectively from 40 patients undergoing cerebral angiography. We proposed a topology-aware geometric deep learning method (TAG-DL) and compared it with state-of-the-art deep learning segmentation models: UNet, nnUNet, and TransUNet. All models were trained on frontal view sequences and tested on both frontal and lateral view sequences from unseen patients. Results were assessed with centerline Dice score and tip-distance error.
Results: The TAG-DL and nnUNet models outperformed TransUNet and UNet. The best performing model was nnUNet, achieving a mean centerline Dice score of 0.98±0.01 and a median tip-distance error of 0.43 (IQR 0.88) mm. Incorporating digital subtraction masks, with or without contrast, significantly improved performance on unseen patients, further enabling exceptional performance on lateral view fluoroscopy despite not being trained on this view.
Conclusions: These results are the first step towards AI augmentation for robotic neurointervention that could amplify the reach, productivity, and safety of a limited neurointerventional workforce.
Data are available upon reasonable request. The code and data that support the findings of this study are available from the corresponding author upon reasonable request, with consideration given to the sensitive clinical nature of the data.
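For context on the evaluation metrics named in the abstract, the sketch below shows one way a centerline Dice score (clDice) and a tip-distance error could be computed for binary catheter masks. This is a minimal sketch, not the authors' released code: the function names, the use of scikit-image skeletonization, and the pixel-spacing argument are illustrative assumptions.

    # Minimal sketch, assuming 2D boolean catheter masks and isotropic pixel spacing in mm.
    # clDice follows the standard topology-precision / topology-sensitivity formulation.
    import numpy as np
    from skimage.morphology import skeletonize

    def centerline_dice(pred: np.ndarray, gt: np.ndarray) -> float:
        """Harmonic mean of topology precision and topology sensitivity."""
        pred, gt = pred.astype(bool), gt.astype(bool)
        skel_pred, skel_gt = skeletonize(pred), skeletonize(gt)
        tprec = (skel_pred & gt).sum() / max(skel_pred.sum(), 1)  # prediction skeleton inside GT mask
        tsens = (skel_gt & pred).sum() / max(skel_gt.sum(), 1)    # GT skeleton inside predicted mask
        return float(2 * tprec * tsens / max(tprec + tsens, 1e-8))

    def tip_distance_error_mm(pred_tip_px, gt_tip_px, pixel_spacing_mm: float) -> float:
        """Euclidean distance between predicted and annotated tip coordinates, in mm."""
        diff = np.asarray(pred_tip_px, dtype=float) - np.asarray(gt_tip_px, dtype=float)
        return float(np.linalg.norm(diff) * pixel_spacing_mm)

In this sketch, a perfect overlap of skeletons and masks yields a clDice of 1.0, and the tip-distance error converts a pixel offset to millimetres via the assumed detector pixel spacing.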