Article Text

Correspondence on: ‘Viz LVO versus Rapid LVO in detection of large vessel occlusion on CT angiography for acute stroke’ by Delora et al
  1. Vivek S Yedavalli,1
  2. Seena Dehkharghani,2
  3. Jonathan Clemente3
  1. Radiology, Johns Hopkins Medicine, Baltimore, Maryland, USA
  2. Radiology, NYU Langone Health, New York, New York, USA
  3. Atrium Health, Charlotte, North Carolina, USA
    Correspondence to Dr Vivek S Yedavalli; vyedava1@jhmi.edu


    We appreciate the effort by Delora et al to help identify sources of false positives and false negatives in AI software.1 We were surprised that the software performance they reported differed substantially from previous reports and from the Food and Drug Administration (FDA) clearance documents for both the Rapid AI and Viz AI software. In previous studies, the average sensitivity and specificity for the Rapid AI large vessel occlusion (LVO) software have been 94% and 90%, respectively.2–4 Prior Viz LVO studies show an average sensitivity of 92% and specificity of 87%.5–7 FDA 510K clearances cite 96% sensitivity and 98% specificity for Rapid LVO and 88% …


    Footnotes

    • X @vsyedavalli

    • Contributors VY is the corresponding author and guarantor. SD and JC edited the paper and approved submission.

    • Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.

    • Competing interests Vivek Yedavalli, Seena Dehkharghani, and Jonathan Clemente are consultants for RAPIDAI.

    • Provenance and peer review Not commissioned; internally peer reviewed.
