Double blind is the term researchers frequently use, and readers frequently accept, as a key marker of the validity of a randomised controlled trial (RCT). Clinical trial experts and clinicians, when asked, all claim to "know" what double blind means; unfortunately, it means different things to different respondents.1 The term lacks consistency in its use and interpretation, a critical flaw in any technical term that is meant to be understood. In this editorial, we advocate abandoning the current blinding lexicon (ie, single, double, and triple blinding) and recommend transparent reporting of the blinding status of each group involved in the execution, monitoring, and reporting of clinical trials.
Blinding (or masking) in RCTs is the process of withholding information about treatment allocation from those who could potentially be influenced by this information. Blinding has long been considered an important safeguard against bias. Benjamin Franklin, in 1784, was probably the first to use blinding in scientific experimentation.2 Louis XVI commissioned Franklin to evaluate mesmerism, the most popular unconventional “healing fluid” of the eighteenth century.2 By applying a blindfold to participants, Franklin removed their knowledge of when mesmerism was and was not being applied. Blinding eliminated the intervention's effects and established mesmerism as a sham.2 From this work, the scientific community recognised the power of blinding to enhance objectivity and it quickly became, and remains, a commonly used strategy in scientific research.
The groups that can potentially introduce bias into an RCT through knowledge of the treatment allocation are shown in the table.
Individuals in the 7 groups in the table are likely to have, or to develop, opinions about the efficacy of the intervention being investigated. Because of these opinions, unblinded individuals can systematically bias trial findings through conscious or unconscious mechanisms. When unblinded, participants may introduce bias through use of other effective interventions, differential reporting of symptoms,3 psychological or biological effects of receiving a placebo (although recent studies show conflicting evidence),4,5 or dropping out. Unblinded healthcare providers can distort trial results if they differentially administer effective co-interventions, influence compliance with follow up, or influence patient reports.3 Unblinded data collectors can introduce bias through differential encouragement during performance testing, differential timing or frequency of outcome measurements, and variable recording of outcomes.6,7 Unblinded judicial assessors may introduce bias in their assessments of outcomes, particularly when the outcomes are subjective.3 Unblinded data analysts have the potential to introduce systematic bias through decisions on patient withdrawals, post hoc selection of outcomes or analytic approaches, selection of time points that show the maximum or minimum effects, and many other decisions.8 Unblinded members of the data safety and monitoring committee may introduce bias at the time of interim analyses through their recommendations to stop or continue a study.8 Blinding of authors, although seldom done,8,9 may reduce biases in the presentation and interpretation of results.
Case reports document individual examples of the biases described above.7,8,10,11 However, no high quality methodological studies have evaluated whether blinding of individual groups systematically affects the estimate of effect in RCTs. Two high quality methodological studies have been published (ie, studies that assessed RCTs within meta-analyses, thereby controlling for the confounders of disease state and intervention), but they assessed only whether investigators' statements that a trial was double blinded influenced the estimate of effect.12,13 Although 1 study showed lower estimates of effect in RCTs reported as double blinded,12 the other found no association between the reporting of double blinding and the estimate of effect.13 Who was actually blinded in these studies probably varied and is certainly open to question. Heterogeneity in who was blinded in the studies reported as double blinded may be responsible for these discrepant findings.
Although the true magnitude of bias introduced by unblinding remains (and is likely to remain) uncertain, clinicians should consider the blinding status of each group in assessing study validity. Unfortunately, the suboptimal reporting of blinding status in full text publications and secondary journals has hindered readers.14,15 Authors have commonly relied on conventional blinding terminology (single, double, and triple blinding) to convey blinding status.1 We have shown great variability in physician interpretations and textbook definitions of these terms.1 It is for this reason that we recommend, and the editors of Evidence-Based Nursing have adopted, a strategy of abandoning the current blinding terminology for transparent reporting of the blinding status of the groups listed in the table. As a result of this policy, readers will be able to make more informed decisions about the validity of the studies on which they base their practice.
This editorial appears in ACP Journal Club, Evidence-Based Mental Health, and Evidence-Based Medicine.