Evidence-based medicine and evidence-based research

A revolution in clinical practice has taken place since the concept of evidence-based medicine (later broadened and called evidence-based practice or evidence-based healthcare) was first introduced in the early 1990s1 (see also: https://www.bmj.com/content/suppl/2007/01/18/334.suppl_1.DC3). The idea was to formulate recommendations in clinical guidelines informed by a synthesis of the research addressing a particular clinical question, that is, a systematic review. Scientists have strongly promoted evidence-based medicine, but increasing evidence indicates that, as researchers, we need to put our own house in order. If asked, any scientist would agree that new projects should build on previous research results within the same field. This is codified in a number of places, including reporting guidelines such as the CONSORT (Consolidated Standards of Reporting Trials) statement.2 However, several studies clearly show that researchers do not explicitly consider prior research. Researchers therefore need to adopt and promote an Evidence-Based Research (EBR) approach, that is, no new studies without a systematic review of the existing literature.

Meta-research

Research on research, also called meta-research, aims “to study research itself and its practices. The objective is to understand and improve how we perform, communicate, verify, evaluate, and reward research.”3 Several meta-research studies4-10 have evaluated whether researchers use a systematic and transparent approach when planning new research and when interpreting new results in the context of existing evidence. The results of these studies clearly indicate that many researchers do not.

Some of these meta-research studies suggest that half, and possibly even more, of all studies are unnecessary and wasteful. For example, Jia et al5 showed that even though 2 separate Chinese clinical guidelines published in 2007 (prepared following the methodology of the American College of Cardiology / American Heart Association and the European Society of Cardiology) recommended the use of statins in patients with coronary artery disease, 2045 original randomized studies addressing the same clinical question were nevertheless performed and published between 2008 and 2019. More alarmingly, the same study indicated that 101 486 patients were included in control groups not treated with statins; 559 (95% CI, 506–612) of these participants died, 973 (95% CI, 897–1052) experienced a new or recurrent myocardial infarction, and 161 (95% CI, 132–190) had a stroke. Most of these major adverse cardiac events could have been prevented, as the clinical guidelines clearly recommended treatment with statins for these patients.

Thus, issues associated with redundant research have a very real clinical impact. One problem that has been identified is that researchers rarely cite all earlier similar studies. Robinson and Goodman7 performed an extensive analysis of 1523 original studies, each of which could have cited at least 3 similar trials conducted earlier. Using systematic reviews addressing the same questions, they identified, for each included study, the earlier similar studies published at least 1 year before. Their analyses showed that, even though the authors could have cited at least 3 and often many more previous trials (508 studies could have cited 10 or more), 55% did not cite a single prior study, and the median number cited was 2, regardless of how many studies could have been referenced. This is rather surprising, as most researchers would accept that research is a cumulative enterprise, meaning that any new study builds upon existing knowledge. The evidence, however, points to something else. Several studies clearly indicate that the decision of which prior research to cite is not based on a systematic and transparent approach but is rather influenced by personal preferences and strategic considerations. For example, interviews with 87 scientists about their reasons for selecting specific citations in a recent publication showed that none cited a study because they had systematically searched for all earlier similar studies.11 Instead, they explained that a specific reference was selected because the author was known to them (24%), the study was regarded as a seminal work in the field (15%), they knew the journal or conference (10%) or the institution or research group (8%), the study had used a sound method (4%), or they had written the study themselves (4%). Several meta-research studies have found that positive, supportive, and statistically significant studies are cited more often than negative, critical, and nonsignificant ones.12-15 In other words, authors of scientific publications do refer to earlier research, but they rarely do so systematically or transparently.

When researchers justify a new study, this lack of systematicity and transparency can lead to redundant trials, and thereby to patients being denied essential treatment and to substantial resources being spent for no good reason. In a recent study, Engelking et al8 showed that only 20% of studies published within the field of anesthesiology used a systematic review to justify the new study. By using a systematic review, researchers embarking on a new study can demonstrate that all available earlier similar studies have been identified and that the selection of references in the justification of the study is not biased but systematic and transparent. Another study, by de Meulemeester et al,16 outlined 3 scientific criteria important for the ethical justification of a randomized clinical trial. First, the researchers should design their study around a clear hypothesis. Second, for the study to be justified, there should exist some uncertainty around the hypothesis to be tested. Last, this uncertainty should be established through a systematic review. They concluded that 56% of the evaluated studies did not meet these 3 criteria and were therefore not scientifically and ethically justified.

A further benefit of using a systematic review to justify a new study is that it can also inform the design of the new study. However, even in the rare cases where a systematic review is mandatory for justifying the study, few researchers seem to use it to inform their design choices. Since 2006, the National Institute for Health Research in the United Kingdom has required a systematic review to be part of the justification of a new funding proposal. A study from 2015 showed that by 2013 all proposals had indeed adhered to this requirement and referred to a systematic review.9 However, although more than 90% used a systematic review to guide their selection of the treatment comparison, fewer than 10% used it to inform design elements such as choosing the frequency or dose, estimating the control event rate, informing the standard deviation, or selecting the intensity of treatment.
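To make one of these design uses concrete, the sketch below sizes a hypothetical two-arm trial using a control event rate estimated from a systematic review. It is a minimal illustration, not taken from the studies cited above; the event rates, effect size, and the helper function n_per_group are invented for the example.

```python
import math

def n_per_group(p_control, p_treatment):
    """Participants needed per arm to compare two proportions
    (normal approximation, two-sided alpha = 0.05, power = 80%)."""
    z_alpha, z_beta = 1.96, 0.84  # standard normal quantiles for alpha and power
    variance = p_control * (1 - p_control) + p_treatment * (1 - p_treatment)
    return math.ceil((z_alpha + z_beta) ** 2 * variance
                     / (p_control - p_treatment) ** 2)

p_control = 0.30    # control event rate estimated from the systematic review (hypothetical)
p_treatment = 0.22  # smallest treatment-arm event rate worth detecting (hypothetical)
print(n_per_group(p_control, p_treatment))  # about 468 participants per arm
```

A pooled control event rate from a systematic review gives a far more stable input to this calculation than the rate observed in any single earlier trial.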

The use of a systematic and transparent approach to the selection of references is equally important at the end of a study. When a new study is finished and its results are interpreted and discussed, this should of course happen within the context of the existing evidence. However, in a series of studies conducted between 1997 and 2013, Chalmers, Clarke, and Hopewell performed 5 repeated evaluations of randomized clinical trials (RCTs) published in the 5 medical journals with the highest impact factors (Journal of the American Medical Association, The Lancet, The BMJ, New England Journal of Medicine, and Annals of Internal Medicine).10 For the month of May in 1997, 2001, 2005, 2009, and 2012, they evaluated all published RCTs to answer the following question (among others): Did the RCT contain an updated systematic review integrating the new results? In total, they found 141 RCTs across these 5 months, but only 5 (3.5%) fulfilled this criterion. Furthermore, there was no indication of any improvement over time.

The concept of evidence-based research

The meta-research studies mentioned above (and many more) raise awareness of a fundamental problem among researchers. Sir Iain Chalmers was one of the key voices highlighting this problem in several papers over the years. As early as 1992, together with his colleagues Kay Dickersin and Thomas C. Chalmers, he argued that “if systematic reviews, updated periodically, had been started at the beginning of a series of related trials, reliable recommendations for treatment would have been made earlier”.17 They referred to the study by Lau et al4 published in the same year, which clearly showed that even though 8 studies evaluating the use of intravenous streptokinase as thrombolytic therapy for acute myocardial infarction had reported a consistent, statistically significant reduction in total mortality, 25 subsequent trials were performed, with no effect on the results except to narrow the confidence interval. These 25 redundant studies enrolled 34 542 patients, leading to the conclusion that up to 17 271 individuals in the control groups may have been denied an effective treatment. Cynthia Mulrow emphasized the same point in a rationale for systematic reviews in The BMJ in 1994, when she stated that researchers should use a systematic review to “identify, justify, and refine hypotheses; recognize and avoid pitfalls of previous work; estimate sample sizes; and delineate important ancillary or adverse effects and covariates that warrant consideration in future studies.”18
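The analysis by Lau et al4 was a cumulative meta-analysis: the pooled effect is recomputed each time a trial is added, in chronological order. The sketch below shows the core computation with a fixed-effect inverse-variance model; the trial data are invented for illustration and are not the streptokinase data.

```python
import math

# Hypothetical trials in chronological order:
# (deaths_treatment, n_treatment, deaths_control, n_control)
trials = [
    (20, 100, 30, 100),
    (15, 120, 25, 118),
    (40, 300, 55, 295),
    (10, 80, 18, 82),
    (60, 500, 80, 510),
]

sum_w = 0.0   # running sum of inverse-variance weights
sum_wy = 0.0  # running sum of weighted log odds ratios
for i, (a, n1, c, n2) in enumerate(trials, start=1):
    b, d = n1 - a, n2 - c                  # survivors in each arm
    log_or = math.log((a * d) / (b * c))   # log odds ratio of this trial
    var = 1 / a + 1 / b + 1 / c + 1 / d    # its approximate variance
    sum_w += 1 / var
    sum_wy += log_or / var
    pooled = sum_wy / sum_w                # fixed-effect pooled log OR so far
    se = math.sqrt(1 / sum_w)
    print(f"After trial {i}: OR = {math.exp(pooled):.2f} "
          f"(95% CI {math.exp(pooled - 1.96 * se):.2f}"
          f"-{math.exp(pooled + 1.96 * se):.2f})")
```

With real data, each successive line would show the confidence interval narrowing around an estimate that was already statistically significant, which is exactly the pattern Lau et al4 documented.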

In 2005, when Fergusson et al6 published a meta-research study evaluating trials of aprotinin in cardiac surgery published over time, Sir Iain Chalmers wrote an accompanying commentary stating: “The article by Dean Fergusson and his colleagues in this issue of the journal (Clinical Trials) is the most recent evidence of an ongoing scandal in which research funders, academia, researchers, research ethics committees and scientific journals are all complicit. New research should not be designed or implemented without first assessing systematically what is known from existing research. The failure to conduct that assessment represents a lack of scientific self-discipline that results in an inexcusable waste of public resources.”6 Fergusson et al6 showed that more than 2000 patients had been included in the control groups of unnecessary studies after it was already known that aprotinin could reduce the need for perioperative transfusion.

Over the following years, more papers were published highlighting the same problem.19-23 In 2009, Karen Robinson published her PhD thesis, in which she stated (page 123) that “While the use of research synthesis to make evidence-informed decisions is now expected in health care, there is also a need for clinical trials to be conducted in a way that is evidence-based. Evidence-based research is one way to reduce waste in the production and reporting of trials, through the initiation of trials that are needed to address outstanding questions and through the design of new trials in a way that maximizes the information gained. Investigators need to identify and consider prior studies to provide the ethical and scientific justification for why they started a clinical trial, and to determine the most appropriate design and methodological characteristics of that trial.” With this, both the problem and a potential solution were clearly laid out: just as healthcare practitioners should use an evidence-based medicine approach to justify their decisions, so should researchers adopt an evidence-based research approach when justifying a new study.

The evidence-based research network

In 2014, Karen Robinson and Hans Lund assembled a group of concerned researchers from Canada, the United States, the United Kingdom, Norway, Denmark, and Australia and established the Evidence-Based Research Network (EBRNetwork).24 The network defines EBR as the use of prior research in a systematic and transparent way to inform a new study, so that it answers questions that matter in a valid, efficient, and accessible manner.25 Many meta-research studies clearly indicate a fundamental problem in the way researchers justify and design new studies and interpret new results in the context of the existing evidence. Hence, the EBRNetwork prepared a series of studies to highlight a possible solution and applied for funding from the European Union to raise awareness and create a strong and lasting network. A paper was also published to underline the problem and the solution and to identify the expectations of key stakeholders.26 In 2018, the European Cooperation in Science and Technology (COST) Action “EVBRES” (another acronym for evidence-based research) was established (more details below), with all 38 COST countries involved, representing wide recognition of the problem and of the need to solve it. In 2020, the EBRNetwork published a series of articles explaining the concept of EBR in more detail.25,27,28

The COST Action “EVBRES”

Redundant clinical research continues to be published because systematic reviews are not used when new studies are planned. This practice is unethical, limits the funding available for important and relevant research, and diminishes the public’s trust in research. To counter it, the EVBRES consortium promotes EBR, defined as the use of prior research in a systematic and transparent way to inform a new study so that it answers the questions that matter in a valid, efficient, and accessible manner. New studies should be informed by systematic reviews as to the most appropriate design and methods. EVBRES helps establish an international, European-based network aiming to raise awareness of the need to use systematic reviews both when planning new studies and when placing new results in context.

For further information about EVBRES, see: https://evbres.eu.

Everyone interested is very welcome to join the network (https://evbres.eu/contact-us/).

The Evidence-Based Research approach

Whenever a new study is under preparation, the elements to be considered in its justification include, among others, the competencies of the researcher(s), the funding and equipment available, and the underpinning research results, for example, animal studies and / or earlier studies. The EBR approach adds 2 further elements (Figure 1).

Figure 1. The elements of an Evidence-Based Research (EBR) approach; reprinted from Robinson et al25 with permission, copyright by Elsevier (2021)

First, to demonstrate the need for the new study, researchers should look for possible research gaps or uncertainty by utilizing (or conducting) a systematic review covering the same question as the planned study. If this systematic review of earlier similar studies shows no knowledge gap, or shows high certainty of the evidence, researchers should shift their focus to an area of greater need. However, if the systematic review clearly indicates low or very low certainty of the evidence or a research gap, there is a strong justification for yet another study. In this way, the new study is shown to be necessary.

However, as strongly advocated by Emanuel et al,29 a new study should not only be necessary but also relevant and important to its end users. In this context, the end users include all who are affected by the results of the new study and all who will use them. Thus, end users can differ considerably between different types of studies. For clinical studies such as randomized controlled trials, they will typically include patients, next of kin, and clinicians. In line with the approach used when establishing the need for a new study, its relevance should also be evaluated using a systematic and transparent approach to identify and assess the needs of end users. We therefore suggest that the researcher(s) responsible for a new study identify a systematic review of qualitative studies and / or surveys including the end users. Such a systematic review will report the values, preferences, experiences, and perspectives related to the aim of the new study in a scientific way, and thus minimize bias. The number of systematic reviews of these types of studies is, of course, dwarfed by the number evaluating the clinical effectiveness of an intervention; however, it has been increasing substantially in recent years.

The EBR approach is described in more detail in Figure 2. Starting out from an idea for a new study, researchers identify a systematic review of earlier similar studies to evaluate whether the planned study is necessary. They also search for a systematic review on the perspectives of the end users to assess the relevance of the new study. If both systematic reviews indicate that the new study is necessary and relevant, the researchers have a very strong justification for taking it forward, and they can answer “yes” to the question “Is the research question justified?”.

Figure 2. The Evidence-Based Research (EBR) approach and outline; reprinted from Robinson et al25 with permission, copyright by Elsevier (2021)

Abbreviations: SR, systematic review

On the other hand, if the systematic reviews indicate no need and no relevance, the answer to the above question will be a “no,” and the researchers ought to revise the question in line with the findings or simply find a new research question to address.

A similar conclusion should be reached when the 2 systematic reviews point in different directions. If the systematic review of earlier similar studies indicates no need for another study, but the systematic review of the perspectives of the end users indicates that such a study would be relevant, the researchers can conclude that the end users’ wish has already been met: the systematic review of earlier studies summarizes the answer to their question, so no new study is required. Thus, the answer to the question regarding justification is negative. Conversely, if there is a research gap or uncertainty, but the systematic review of the perspectives of the end users indicates that the question is irrelevant to them, the researchers should carefully weigh the needs of the end users against the scientific necessity of the study. A study that addresses an evidence gap may not always be relevant to the end users.
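As a schematic summary (not part of the published EBR materials), the justification decision can be expressed as a simple function of the 2 review outcomes:

```python
def is_new_study_justified(evidence_gap: bool, relevant_to_end_users: bool) -> str:
    """Summarize the EBR justification decision from the 2 systematic reviews."""
    if evidence_gap and relevant_to_end_users:
        return "Justified: proceed, and let both reviews inform the design."
    if not evidence_gap and not relevant_to_end_users:
        return "Not justified: revise or replace the research question."
    if not evidence_gap:
        # Relevant to end users, but existing evidence already answers the
        # question: the systematic review itself fulfills their wish.
        return "Not justified: point end users to the existing evidence."
    # Evidence gap exists, but end users find the question irrelevant:
    # weigh scientific necessity against end-user needs before proceeding.
    return "Uncertain: weigh scientific necessity against end-user relevance."

print(is_new_study_justified(evidence_gap=True, relevant_to_end_users=True))
```

The asymmetry between the 2 mixed cases reflects the reasoning above: existing evidence can satisfy end-user relevance, whereas an evidence gap alone does not guarantee relevance.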

Once it has been determined that the research question is justified, the researcher(s) can use the systematic reviews to inform the design of the new study.

After the researchers have carried out the study and are preparing its report for publication, they need to interpret the new results within the context of the existing evidence, based on the systematic review of earlier similar studies (updated, if required). This can be done by integrating the new results with the existing evidence and even, where possible, by adding the new results to a meta-analysis of all earlier studies. In this way, the authors are encouraged to consider all the different results from earlier studies and not just cite the studies that fit “the story” they want to tell. The EBR approach has been described in more detail elsewhere.25,27,28
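As a minimal sketch of this final step, assuming hypothetical numbers throughout, the new trial’s result can be combined with the pooled estimate from the updated systematic review using the same inverse-variance logic as in the cumulative example above:

```python
import math

# Hypothetical pooled result of the updated systematic review (log odds ratio)
prior_log_or, prior_se = math.log(0.80), 0.10
# Hypothetical result of the newly completed trial
new_log_or, new_se = math.log(0.85), 0.25

# Fixed-effect inverse-variance combination of the prior pool and the new trial
w_prior, w_new = 1 / prior_se ** 2, 1 / new_se ** 2
combined = (w_prior * prior_log_or + w_new * new_log_or) / (w_prior + w_new)
combined_se = math.sqrt(1 / (w_prior + w_new))

lo = math.exp(combined - 1.96 * combined_se)
hi = math.exp(combined + 1.96 * combined_se)
print(f"Updated OR: {math.exp(combined):.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

Reporting such an updated pooled estimate alongside the new trial result makes explicit how much, or how little, the new study changed the overall evidence.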

Perspectives

The EBR approach acknowledges that researchers have always referred to some earlier studies in their publications but, at the same time, it raises the key issue that the justification and design of a new study are rarely performed systematically and transparently (read: scientifically) by utilizing systematic reviews. Owing to the digital revolution, the scientific ideal of research being cumulative is within reach. Without implementing the EBR approach, researchers and health systems risk wasting money, precious time, and resources, and increase the risk of harm to patients, who may participate unnecessarily in studies in which they undergo outdated interventions or are denied a treatment that has already been proven effective. The EBR approach is therefore not just an academic principle but a way of achieving research that is ethical, relevant, and worthwhile.