Evid Based Nurs 11:6-8 doi:10.1136/ebn.11.1.6
  • EBN notebook

Putting evidence into context: some advice for guideline writers

  1. Jonathan Dartnell1,
  2. Mary Hemming1,
  3. Joe Collier2,
  4. Guenter Ollenschlaeger3
  1. Therapeutic Guidelines Limited, Melbourne, Victoria, Australia
  2. St George’s, University of London, London, UK
  3. Agency for Quality in Medicine, Berlin, and Medical Faculty, University of Cologne, Cologne, Germany

      Evidence-based practice (EBP) has been defined as the “integration of best research evidence with clinical expertise and patient values.”1 Though a laudable ideal, it is not feasible for individual clinicians to review, interpret, and apply all relevant evidence all of the time. Hence, clinicians should have access to “a set of tools and resources for finding and applying current best evidence from research for the care of individual patients.”2 Clinical guideline writers aim to perform part of this role, but they must also resolve these practical challenges if they are to provide tools that help clinicians deliver real evidence-based practice.3 4 Here we consider some challenges for guideline writers in producing clinical advice that meets the demands of busy clinicians.


      Knowledge to support decision making may be derived from published research, locally generated data, clinician experience, the law, and patient perspectives. Each can be regarded as “supporting evidence,” and so confusion sometimes arises in discussions about evidence and EBP. For treatment evaluation, quantitative research evidence (in particular from randomised clinical trials [RCTs] and systematic reviews of RCTs) generally takes precedence over other forms of evidence.5 But evidence from well conducted studies alone rarely provides answers to all questions in a particular clinical situation. Hence, giving best advice requires us to extrapolate and integrate the evidence to meet the demands of everyday clinical practice. This process requires interpretation and judgment.

      To be of value to a clinician, trials must be up-to-date and valid and have used clinically relevant doses, patients, comparators, end points, and durations.6 Interpreters of these trials must reconcile conflicting results and take into account publication bias, reviewer bias, and relevance to current practice.7-9 They must check that clinically important details are not hidden, overlooked, or “averaged out” by the methods of the study.10 The averaging-out implicit in statistical analysis diminishes the applicability to any one individual and oversimplifies the choices to be made. Showing that one treatment is better than another on average does not mean it is the best treatment for every individual.10

      Interpreters must also reconcile the mismatch between the narrowly defined group of patients in a trial and the patients in the clinical environment, and they must consider local issues such as costs, services, and laws and cultures. In individual cases, a therapy may not be appropriate because of coexisting diseases, contraindicated comedications, risk factors, health status, patient preferences, or patient circumstances.

      The availability of evidence may, of itself, distort practice. EBP is highly dependent on the generators of research evidence, whose goals are not generally consistent with those of the users of research evidence. Trials are costly and are mostly undertaken by the pharmaceutical industry, and so health care may become captured by industry priorities. More specifically, published clinical trials tend to represent a biased (positive) sample of the total data pool. Trials with significantly positive results are more likely to be published—and be published earlier.11 Caution is required in interpretation of truncated RCTs: trials that are stopped earlier than planned because of an apparent benefit often overestimate the benefit and underestimate the harm of interventions.12 Results from trials funded by pharmaceutical companies tend to be “more positive” than those funded by “independent” sources.13

      The methods for guidelines, explicit or implicit, determine which evidence is used, how it is used, and the effect it may have on recommendations. Some guidelines may only make recommendations based on data from RCTs, whereas other guidelines may make recommendations based on best available evidence and expert group agreement.


      We believe that guideline writers must address 6 issues if they wish their product to be used to improve clinical practice.

      1. Use the best available evidence

      To answer the full spectrum of clinical questions, different sorts of evidence must be used. Guideline writers can only use the best available evidence and should not be unduly constrained if it is not thought to be the best possible evidence.

      The EBP movement has been criticised for its acceptance of only a narrow range of research methodologies that are unable to assess the effectiveness of interventions more complex than simple treatments (ie, drug therapy).15 There are other sources and types of important and clinically useful evidence besides RCTs. Different types of questions require different types of evidence, and there are indications and contraindications for different types of research evidence.14 For example, although case reports are a less than perfect source of evidence, they can signal potential or rare harms or benefits of an effective treatment.14

      Caution is required in the application and use of ratings of levels of evidence.15 Poorly designed hierarchies can lead to anomalous rankings. For example, a few small, poor-quality RCTs might result in a level 1 ranking, but a single, large, well-conducted, multisite RCT would only merit a level 2 ranking.14 Hence, level alone should not be used to grade evidence but rather should be the starting point for a more thorough appraisal that includes the quality and size of the study, and the size and relevance of the effects.15

      2. Harness a diversity of expertise and opinion

      The translation of evidence into recommendations is not straightforward; evidence can be interpreted in different ways depending on mindsets and experiences.16 Even when there is very good evidence, different experts may synthesise it to reach different conclusions about optimal therapy.17

      As factors other than the evidence affect group decisions,16 it is essential that the process used to distil evidence and produce recommendations has features that minimise the influence of undesirable biases. The organisation responsible for, and funding the production of, such information should avoid conflicts of interest such as pharmaceutical industry sponsorship.18 The members of the writing groups should be independent and respected practitioners with expertise in a broad range of clinical domains from different disciplines and include representatives from different clinical perspectives, such as urban and rural settings. In addition, clinicians need to understand the context to which the research is applied. Without such experiential knowledge, the evidence may not be appropriately contextualised or implemented.

      The process of content development should be iterative, responding to changes in evidence as well as feedback from users, external reviewers, and opinion leaders.

      3. Allow users to fit advice to the individual patient and context

      Dealing with multiple disparate pieces of information is complex and error prone. Clinical judgment and experience are essential in gauging how much weight should be attributed to each factor. Hence, published guidance needs to be flexible and adaptable to various clinical contexts. Guidelines should provide information that can be integrated with individual clinical expertise so that clinicians can make decisions about whether and how it matches the patient’s clinical state, predicament, and preferences, and thus whether it should be applied.19

      4. Balance simplicity, flexibility, and completeness

      The most helpful clinical information is problem-oriented. While based on the best available evidence, it focuses on the problems seen in day-to-day practice. Usability of information and its potential integration into the clinical workflow is critical. To meet these criteria, guidelines should be readable, easy to use, and available in different forms for different groups of users (eg, handbooks, electronic versions for desktop computers, and handheld computers). Furthermore, they should be easily and quickly navigable by readers.20

      High usability in the clinical environment requires relative simplicity, which may be compromised if attempts are made to include details of the totality and complexity of evidence that goes into creating a guideline. Total clarification and documentation of the evidence behind the myriad pieces of advice is probably not possible and would be beyond the capacity of most writing groups.17

      In practice, most users are probably willing to accept the advice provided by a trusted source in the knowledge that, if they want to see the underlying evidence, they have relatively easy access to it.17

      5. Build trust through quality, transparency, and independence

      The response of clinicians to information is influenced by the trust they have in it. That trust depends on the integrity and reputation of the organisation publishing the information. Information from government sources could be perceived as promoting the government agenda; information from the pharmaceutical industry could be thought to accentuate the benefits, or minimise the adverse effects, of a particular drug therapy. Sometimes the sources are unclear. However, it is easier to trust organisations that are transparent and make clear their publishing goals, the process used to produce the information, their funding, and article authorship.18 20

      The process by which the guidance is developed is critical to the trust it earns. More reliance will be placed on information developed by a panel of independent experts following a rigorous, exhaustive, well-documented, and tested process. The process needs to be judged on its capacity to distil and contextualise relevant evidence, as well as to provide sound guidance for situations where there is little or no evidence. The process needs to be insulated from possible vested interests.

      To gain trust, there must be clear lines of accountability, and in the event of error, a preparedness to announce and correct the mistake. Patient welfare should take precedence over publication survival.

      6. Give specific, but not constraining, guidance

      The effect on health care of dissemination and implementation of guidelines is variable.21 Studies assessing the effectiveness of different strategies for implementation of guidelines (eg, reminders, audit and feedback, opinion leaders, and educational outreach) have shown modest effects and no clear pattern of results. This is not surprising given that the impact is likely to depend on a vast array of factors relating to guideline credibility, availability, accessibility, complexity, and applicability; and clinician awareness, memory, acceptance, and trust, as well as opportunity, motivation, and capacity to use the guideline appropriately. However, a critical factor often not addressed is the explicitness of guideline recommendations. If guidelines are to change behaviour, they need to be clear about what clinicians are to do.22 Specific recommendations for action increase understanding of what needs to be done, improve recall of what should be done when, and increase capacity to plan and enact recommended practices.22 For example, the Therapeutic Guidelines (eg, eTG complete23) usually provide several options for treatment, but for each option, specific generic names, doses, and durations are given. The options may be listed as equivalent, or a preferred order may be indicated. For non-drug treatments, such essential details are likely to be more complex.

      Even though guidelines may not consistently make a direct or clearly measurable impact on patient care, this does not mean that guidelines are not valuable. Clinicians need references against which they can measure (audit) their own practice. Usable guidelines that can provide a “second opinion” for the many uncertainties that exist in day-to-day practice are valuable.


      It is not possible for individual clinicians to “become responsible for integration of best research evidence with clinical expertise and patient values” without tools that can provide information that is problem-based and focused on the problems seen in day-to-day practice. Neither clinical trials nor meta-analyses can themselves provide the practical clues needed to implement their findings. And data provided by government or the pharmaceutical industry may not necessarily be trusted. We believe that well-produced, independent clinical practice guidelines that are trusted and provide integrated evidence in a clinical context offer clinicians the best chance of getting reliable support for decision making in clinical practice.


      The authors thank Paul Glasziou and Sharon Straus for helpful comments on earlier drafts.


      • This notebook also appears in Evidence-Based Medicine.
