Letter to the Editor
Could we clinicians be the greatest barrier to real progress in our field?
T E Darsaut,1 R Fahed,2 J Raymond2,3

1Division of Neurosurgery, Department of Surgery, University of Alberta Hospital, Mackenzie Health Sciences Centre, Edmonton, Alberta, Canada
2Laboratory of Interventional Neuroradiology, Centre Hospitalier de l'Université de Montréal, Notre-Dame Hospital Research Centre (CRCHUM), Montreal, Quebec, Canada
3Department of Radiology, Centre Hospitalier de l'Université de Montréal (CHUM), Notre-Dame Hospital, Montreal, Quebec, Canada

Correspondence to Dr Jean Raymond, Department of Radiology, CHUM, Notre-Dame Hospital, 1560 Sherbrooke East, Pavilion Simard, suite Z12909, Montreal, QC, Canada H2L 4M1; jean.raymond{at}umontreal.ca


In a recent editorial, Fiorella et al1 marvel at the recent maturation of the neurointerventional surgical field, which they attribute to what they claim are ‘two seemingly diametrically opposed factors’: industry-driven research and evidence-based practice (emphasis ours).1 The thesis of the editorial is that, although randomized controlled trials (RCTs) are the gold standard, they are not feasible in many circumstances.

Here is the list of situations for which RCTs are allegedly not ‘feasible’: ‘At the introduction of a new device’; ‘When devices are designed to treat diseases that have a poor natural history with standard management’; ‘When no suitable control group exists’; ‘When iterative technologies emerge to compete with existing technology’; ‘When the disease is insufficiently prevalent for an RCT to be completed’; ‘When there is no market to support an industry-sponsored trial’; and ‘When treatment is proven for the same disease in a different patient population’.1

Are there any indications left? When should our community properly test innovative treatments against the standard management that existed until then? If not at the introduction of the innovation, when uncertainty is maximal; if not later in the process, when clinicians are ‘accumulating critical case experience’; and not even when a second iteration emerges, then when? According to the authors, it is too late when ‘equipoise no longer exists’. Thus we are invited to consider innovative solutions. Unfortunately, the small case series with historical comparisons that the authors propose can hardly qualify as innovative: they are the very methods which have previously misled us and that …


Footnotes

  • Competing interests None declared.

  • Provenance and peer review Not commissioned; externally peer reviewed.

Linked Articles

  • Editor's column
    David Fiorella, J Mocco, Adam Arthur, Adnan Siddiqui, Don Heck, Felipe Albuquerque, Aquilla Turk