Abstract
The field of neurointerventional (NI) surgery has developed in the context of technologic innovation. Many treatments readily provided in 2014 would have been hard to imagine as recently as 10 years ago. The reality of present-day NI care is that, while providers, payers, policymakers and patients rely on evidence to guide NI decision-making, the available data are often less robust than participants might desire. In this paper we will explore the fundamentals of evidence-based clinical practice.
Introduction
In 2014, perhaps the most frequently discussed research initiatives are evidence-based medicine (EBM) and comparative effectiveness research (CER).1 This paper will review EBM and describe a relevant variant for neurointerventional (NI) providers: evidence-based clinical practice.
The rationale for implementing EBM hinges on improving patient care through better-informed clinical decision-making in diagnosis and treatment. EBM, as its name indicates, refers to the process of evaluating the currently available scientific, epidemiological and statistical evidence and then applying the resulting conclusions to clinical decision-making and practice.2 An acknowledged challenge for clinical decision-making is that it is difficult to predict whether the available data apply to a specific patient (ie, whether the patient resembles the relevant study population).3 Put differently, while it is difficult to be certain where a patient falls within a bell-shaped curve, evidence allows a practitioner to make the best possible estimation. For medical purposes, evidence can derive from any level of data or information and can be obtained through experience, observational studies or experimental research.4 EBM endeavors to systematize knowledge and stresses the criticality of evidence from clinical research.1
EBM has also drawn widespread attention in many circles, including the Institute of Medicine (IOM).5 In ‘Crossing the Quality Chasm,’ the IOM called attention to the challenges that healthcare participants face in applying new developments to the day-to-day practice of medicine, thereby demonstrating its support of EBM. In addition, in 2011 the IOM revised its definition of clinical guidelines from an earlier definition published in 1990.6,7 Further, the IOM has published multiple manuscripts, including methodology to assist those conducting systematic reviews.8 As defined by David Sackett, EBM is the conscientious, explicit and judicious use of current best evidence in making decisions about the care of individual patients.9 Others have defined it more specifically as “the use of mathematical estimates of the risk of benefit and harm, derived from high quality research on population samples, to inform clinical decision-making in the diagnosis, investigation or management of individual patients”.10 The practice of EBM integrates the physician's individual clinical expertise with the best available external clinical evidence.8–14

While EBM continues to disseminate widely, critics have argued that its universal application might suppress physician clinical freedom and restrict the ability of clinicians to alter treatment plans to address unique patient-specific problems, which are often nuanced and not addressed by the body of existing clinical evidence.9–11 In part to broaden its application from individual patients to healthcare services in general, EBM has been called by various names, including evidence-based practice (EBP), evidence-informed healthcare and evidence-based healthcare. In this broader sense, EBP is an interdisciplinary approach to clinical practice. Evidence-based neurointerventional practice thus entails promoting health or providing care by integrating the best available evidence with practitioner expertise and other resources while simultaneously taking into account individual patient characteristics, values and preferences. The broad application of EBM includes rigorous analysis of the published literature to synthesize high-quality evidence, such as systematic reviews, and preparation of clinical guidelines.6–8 The IOM has described systematic reviews as a tool to identify, select, assess and synthesize the findings of similar studies and to help clarify what is known and not known about the potential benefits and harms of drugs, devices and other healthcare services. The IOM has also recently defined clinical practice guidelines as statements that include recommendations intended to optimize patient care that are informed by a systematic review of evidence and an assessment of the benefits and harms of available treatment options.6
Manuscripts or guidelines which incorporate EBM or CER methods can be prepared by any individual and, as such, have the potential to be misused. In neurointerventional surgery (NIS), the Society of NeuroInterventional Surgery (SNIS) guidelines are developed using the stated IOM guideline criteria with the strictest application of high-quality statistical methodology.15–17
EBM is often seen as a scientific tool for quality improvement, even though its application requires consideration of scientific facts along with value judgments and the cost of different treatments. As such, EBM exerts a fundamental influence on certain key aspects of medical professionalism. The actual value of evidence is related to its application and the circumstances in which and for whom it is used. Relevant factors include the type of evidence reviewed, the methodology utilized, and the knowledge and experience of the reviewers, as well as bias, self-interest and financial considerations. In order for clinicians to interpret the results of clinical research effectively, a formal set of rules must complement medical training and common sense.1 Consequently, knowing the tools of EBP is necessary, but not alone sufficient, for delivering the highest quality of patient care. It continues to be a challenge to balance EBM, CER and neurointerventional practice with new scientific innovations and the traditional methods of caring for the sick. The definition of EBM is used loosely and can include conducting a statistical meta-analysis of accumulated research, promoting randomized controlled trials (RCTs), supporting uniform reporting styles for research or having a personal orientation toward critical self-evaluation.
Historical aspects
While the roots of EBM are several hundred years old, it was formally defined in the late 1970s when a group of researchers at McMaster University in Canada authored a series of manuscripts on how to critically appraise scientific information.11
The term ‘evidence-based medicine’ first appeared in 1990 at McMaster University as part of a packet of information supplied to entering residents, and it subsequently appeared in print in the ACP Journal Club in 1991.11 The McMaster group, joined by academic physicians largely from the USA, then formed the first International EBM Working Group and published ‘Users’ Guides to the Medical Literature’ in JAMA between 1993 and 2000 as a 25-part series which still resonates today. These papers were later turned into a textbook on EBM.11,12
In 1993 the Cochrane Collaboration was created in response to Archie Cochrane's call for up-to-date systematic reviews of all relevant RCTs in healthcare, and it continues to publish systematic reviews quarterly.13 These reviews are used by the National Health Service in the UK and, because of their high quality, elsewhere. By 2013 the Cochrane database contained 5804 full reviews and another 2386 protocols for reviews in production.13
In the USA, federal efforts at EBM might be considered to have started with the short-lived National Center for Healthcare Technology. In 1972 the Office of Technology Assessment (OTA) was created as an advisory agency to Congress, with healthcare among the issues it covered; the OTA was eliminated in 1995. In 1989, during the presidency of George H W Bush, the Agency for Healthcare Policy and Research (AHCPR) was created as an arm of the Department of Health and Human Services (DHHS).14 The agency's role was to enhance the quality and ultimately the effectiveness of healthcare services in the USA. As a government agency, the AHCPR prioritized areas that accounted for disproportionate government expenditures. It developed 19 clinical practice guidelines at the astronomical cost of $750 million.18 Ultimately, secondary to significant political pressure, the AHCPR was reorganized as the Agency for Healthcare Research and Quality (AHRQ) in December 1999.14 Its mission statement included the very straightforward proposition of promoting ‘quality research for quality healthcare’. Thus, the AHRQ attempts to facilitate the generation and appropriate application of evidence that can be utilized to enhance the quality of healthcare.
The Medicare Modernization Act (MMA), a federal law enacted in 2003 during the presidency of George W Bush, represented a tremendous change to Medicare.19 The MMA authorized the AHRQ to spend up to $50 million in 2004, and additional amounts in future years, to conduct and support research with a focus on ‘outcomes, comparative clinical effectiveness, and appropriateness of healthcare items and services’ for Medicare and Medicaid enrollees.19 Using that funding, the AHRQ has established an ‘effective healthcare’ program.
Further funding in the American Recovery and Reinvestment Act (ARRA) boosted EBM, the role of government and CER.20 The Patient Protection and Affordable Care Act (the ACA, for short), signed into law on March 23, 2010, created the Patient Centered Outcomes Research Institute (PCORI), moving CER further forward.21–25
Definition of EBM
As described earlier, EBM is the conscientious, explicit and judicious use of current best evidence in making decisions about the care of individual patients.9 In contrast to EBM, CER is defined as the generation and synthesis of evidence that compares the benefits and harms of alternative methods to prevent, diagnose, treat and monitor a clinical condition or to improve the delivery of care.1 EBM and CER are not based solely on randomized trials even though, in the hierarchy of clinical research, randomized trials are considered a higher level of evidence.
The authors of this review want to highlight that EBM should be seen as a comprehensive integration of the scientific evidence, not only data from RCTs. Clinician experience is a key consideration in patient-specific issues, and it is this amalgamation which best aids clinical decision-making. EBM thus involves two basic principles. First, scientific evidence is important in clinical decision-making, but patients’ values should also be considered. Second, while different types of evidence carry different weights, with RCTs at the top of the hierarchy, that hierarchy is not absolute.1,10,26
All definitions of EBM involve three overlapping processes:6,7
- Systematic review of the available scientific studies
- Integration of such scientific data with clinical experience
- Patient participation in decision-making.
Hierarchy of evidence
EBM is informed by a hierarchy of evidence, and this hierarchy guides clinical decision-making. In descending order of evidentiary weight, the levels are: (1) systematic reviews of multiple high-quality randomized trials; (2) a single high-quality randomized trial; (3) systematic reviews of observational studies addressing patient-important outcomes; (4) single observational studies addressing patient-important outcomes; (5) physiologic studies; and (6) unsystematic clinical observations.12 It is important to reiterate that this hierarchy should not be viewed as absolute. Furthermore, the quality of a given trial's design and its relevance to current practice must be considered. A poorly designed or executed RCT is no better than observational or single-arm trial data and, because of misapplication, may well be worse. It is also important to recognize that, if treatment effects are sufficiently large and consistent, observational studies may provide more compelling evidence than RCTs, particularly in situations where RCTs are not feasible.12 However, unsystematic clinical observations are certainly more susceptible to quality variation and are often limited by small sample size and, more importantly, by deficiencies of inference.12
Discussion
Scientific studies are a critical component of evidence-based clinical practice. However, pertinent studies must be well constructed and durable; a poorly designed RCT should not overturn clinical experience and observational studies. In addition, evidence derived from RCTs is directly applicable only to those patients who would have qualified for inclusion in those trials and to those treatments offered within the trials. As such, evidence-based clinical practice must remain constantly adaptive, particularly within the context of rapidly evolving, technologically driven subspecialties of medicine. For the majority of patients, clinical decision-making requires an extrapolation of the knowledge gained from RCTs (or other studies) which address similar (but not identical) scenarios. This extrapolation requires the application of clinical experience and represents the art of medicine.
Patient-centered NI care is a vision that can be realized. If the field is to move forward, it must advance through well thought-out research that continuously motivates good practice. At the same time, we must continue to promote the development of new techniques of the kind that have revolutionized NI throughout its existence.
SNIS guidelines often employ the American Heart Association (AHA) Evidence-Based Scoring System. In this system, recommendations are classified from Class I to Class III using the paradigm summarized below:
- Class I: conditions for which there is evidence, general agreement, or both that a given procedure or treatment is useful and effective.
- Class II: conditions for which there is conflicting evidence, a divergence of opinion, or both about the usefulness/efficacy of a procedure or treatment; this class is often further divided into IIa and IIb subcategories.
- Class III: conditions for which there is evidence, general agreement, or both that the procedure/treatment is not useful/effective and in some cases may be harmful.
The AHA system also applies a stratification of the supporting evidence that ranges from A to C, where A is data derived from multiple RCTs, B is data derived from a single RCT or non-randomized study and C is consensus opinion of experts.
There are several examples of recent RCTs within the NI space that suggest limited benefit of some of our core treatments, including treatment of arteriovenous malformations, vertebral augmentation and stroke.27–29
These have been discussed in a number of articles.30 The recent IMS 3 trial is a representative example: a prospective international randomized trial (AHA Class I) which suggested that endovascular therapy does not help patients with stroke beyond the benefits of intravenous tissue plasminogen activator. On the surface the implications of this study are far-reaching and, taken to their logical extreme, suggest that stroke patients do not benefit from endovascular treatment. However, upon more rigorous scrutiny, there are a number of limitations within the trial that weaken the conclusions one might draw. To discuss a few: endovascular therapy targets large vessel occlusions, yet there was no mandatory vessel imaging in the triage process, and 26% of patients who underwent angiography did not have a large vessel occlusion. The prespecified subgroup analysis of patients with documented large vessel occlusions did show a statistically significant improvement in functional outcome in those patients who underwent endovascular therapy. In addition, the trial took over 6 years to recruit patients, during which time significant technological evolution occurred. By the conclusion of the trial, mechanical stent retrievers and larger bore aspiration catheters were considered the standard of care, yet these devices combined represented approximately 18% of the cases treated in the trial. The majority of cases (80%) were treated with intra-arterial thrombolysis or MERCI, which are rarely used in today's NI practice.
Shaneyfelt et al31 described the frequent failure of clinicians to implement clinical interventions that have been shown to be efficacious.32,33 The SNIS has responded with educational efforts at national meetings and numerous collaborative peer-reviewed publications in subspecialty journals on evidence-based standards and guidelines.
Conclusion
Where does that leave SNIS members and readers of JNIS? First, we must acknowledge that shifting toward evidence-based clinical practice is not as easy as it first sounds. EBM relies equally on the integrative skills of the individual clinician and on systematically organized analysis and synthesis provided by the review process itself.
NI specialists must recognize that evidence is variable in quality and quantity and must be related to the circumstance(s) of the individual patient. Put differently, the meaning of any body of evidence differs for physicians, administrators, payers and patients. Being able to interpret both the validity of evidence and its relative value is essential to determining meaningful policy. It is in that context that the SNIS ushers in a new era of evidence-based clinical practice.
References
Footnotes
- Correction notice This article has been corrected since it was published Online First. The author name David A Fiorella has been amended to read David J Fiorella.
- Contributors JAH and LM did the original research and provided a first draft. All authors reviewed the draft, provided commentary and editorial suggestions.
- Competing interests None.
- Provenance and peer review Not commissioned; internally peer reviewed.