
Education and debate

Evaluating the teaching of evidence based medicine: conceptual framework

BMJ 2004; 329 doi: https://doi.org/10.1136/bmj.329.7473.1029 (Published 28 October 2004) Cite this as: BMJ 2004;329:1029
  1. Sharon E Straus, associate professor (sharon.straus@utoronto.ca)1,
  2. Michael L Green, associate professor2,
  3. Douglas S Bell, assistant professor3,
  4. Robert Badgett, associate professor4,
  5. Dave Davis, professor5,
  6. Martha Gerrity, associate professor6,
  7. Eduardo Ortiz, associate chief of staff7,
  8. Terrence M Shaneyfelt, assistant professor8,
  9. Chad Whelan, assistant professor9,
  10. Rajesh Mangrulkar, assistant professor10

    on behalf of the Society of General Internal Medicine Evidence-Based Medicine Task Force

  1 Department of medicine, Toronto General Hospital, 200 Elizabeth Street, 9ES-407, Toronto, Ontario M5G 2C4, Canada
  2 Department of internal medicine, Yale University School of Medicine, New Haven, CT, USA
  3 Department of medicine, David Geffen School of Medicine, UCLA, Los Angeles, CA, USA
  4 Department of medicine, University of Texas Health Science Centre at San Antonio, TX, USA
  5 Department of health policy, management and evaluation, University of Toronto, Toronto, Canada
  6 Department of medicine, Oregon Health Sciences University, Portland, OR, USA
  7 Washington DC VA Medical Centre, Washington, DC, USA
  8 Department of medicine, VA Medical Affairs, Birmingham, AL, USA
  9 Department of medicine, University of Chicago, Chicago, IL, USA
  10 Department of medicine, University of Michigan Medical School, Ann Arbor, MI, USA

  Correspondence to: S E Straus

    Although evidence for the effectiveness of evidence based medicine has accumulated, there is still little evidence about which methods of teaching it are most effective.

    Introduction

    Interest in evidence based medicine (EBM) has grown exponentially, and professional organisations and training programmes have shifted their agenda from whether to teach EBM to how to teach it. However, there is little evidence about the effectiveness of different teaching methods,1 and this may be related to the lack of a conceptual framework within which to structure evaluation strategies. In this article we propose a potential framework for evaluating methods of teaching EBM. Showing the effectiveness of such teaching methods relies on both psychometrically sound measurement instruments and methodologically rigorous, appropriate study designs; our framework addresses the former.

    This effort was initiated by the Society of General Internal Medicine Evidence-Based Medicine Task Force.2 In an attempt to tackle the challenges in designing and evaluating a series of teaching workshops on EBM for busy practising clinicians, the task force created a conceptual framework for evaluating teaching methods. This was done by a working group of clinicians interested in the subject. They completed a literature review of instruments used for evaluating teaching of EBM (manuscript in preparation), and two members of the task force used the information to draft a conceptual framework. This framework and relevant background materials were discussed and revised at a consensus conference including 10 physicians interested in EBM, evaluation of education methods, or programme development. We then sent a revised framework to all members of the task force and six other international colleagues interested in the subject. We incorporated their suggestions into the framework presented in this article.

    When formulating clinical questions, advocates of EBM suggest using the “PICO” approach—defining the patient, intervention, comparison intervention, and outcome.3 We used this approach to provide a framework for the evaluation matrix, specifically:

    • Who is the learner?

    • What is the intervention?

    • What is the outcome?

    The answers to these three questions form the structure of our conceptual model.

    Who is the learner?

    Learners can be doctors, patients, policy makers, or managers. This article focuses on doctors, but our evaluation framework could be applied to other audiences.



    Not all doctors want or need to learn how to practise all five steps of EBM (asking, acquiring, appraising, applying, assessing).4 5 Indeed, most doctors consider themselves users of EBM, and surveys of clinicians show that only about 5% believe that learning all five steps is the most appropriate way of moving from opinion based to evidence based medicine.4

    Doctors can incorporate evidence into their practice in three ways.3 6 In a clinical situation, the extent to which each step of EBM is performed depends on the nature of the encountered condition, time constraints, and level of expertise with each of the steps. For frequently encountered conditions (such as unstable angina) and with minimal time constraints, we operate in the “doing” mode, in which at least the first four steps are completed. For less common conditions (such as aspirin overdose) or for more rushed clinical situations, we eliminate the critical appraisal step and operate in the “using” mode, conserving our time by restricting our search to rigorously preappraised resources (such as Clinical Evidence). Finally, in the “replicating” mode we trust and directly follow the recommendations of respected EBM leaders (abandoning at least the search for evidence and its detailed appraisal). Doctors may practise in any of these modes at various times, but their activity will probably fall predominantly into one category.
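
    For readers who find a schematic summary helpful, the sketch below (written in Python, purely as an illustration) maps each mode of practice to the EBM steps it typically retains. The step lists are a simplification of the description above rather than a formal definition, and the names used are ours.

        # Illustrative sketch: which of the five EBM steps each mode of practice
        # typically retains. A simplification of the description in the text,
        # not a formal specification.

        EBM_STEPS = ["asking", "acquiring", "appraising", "applying", "assessing"]

        MODE_TO_STEPS = {
            # "Doing": at least the first four steps are completed.
            "doing": ["asking", "acquiring", "appraising", "applying"],
            # "Using": the appraisal step is dropped by relying on rigorously
            # preappraised resources (such as Clinical Evidence).
            "using": ["asking", "acquiring", "applying"],
            # "Replicating": recommendations of respected EBM leaders are followed,
            # abandoning at least the search for evidence and its detailed appraisal.
            "replicating": ["applying"],
        }

        def steps_for(mode: str) -> list[str]:
            """Return the EBM steps typically performed in a given mode of practice."""
            return MODE_TO_STEPS[mode]

        if __name__ == "__main__":
            for mode in MODE_TO_STEPS:
                print(f"{mode}: {', '.join(steps_for(mode))}")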

    The various methods of teaching EBM must therefore address the needs of these different learners. One size cannot fit all. Similarly, if a formal evaluation of the educational activity is required, the evaluation method should reflect the different learners' goals. Although several questionnaires have been shown to be useful in assessing the knowledge and skills needed for EBM,7 8 we must remember that the knowledge and skills these tools target may not match those of our own learners. The careful identification of our learners (their needs and learning styles) forms the first dimension of the evaluation framework that we are proposing.

    What is the intervention?

    The five steps of practising EBM form the second dimension of our evaluation framework. But what is the appropriate dose and formulation? If our learners are interested in practising in the “using” mode, our teaching should focus on formulating questions, searching for evidence already appraised, and applying that evidence. Evaluation of the effectiveness of the teaching should exclusively assess these steps. In contrast, doctors interested in practising in the “doing” mode would receive training in all five steps of practising EBM, and the evaluation of the training should reflect this.

    Published evaluation studies of teaching EBM show the diversity of existing teaching methods. Some evaluate the teaching of EBM as an overall approach to clinical practice, whereas others evaluate training in a single EBM skill, such as searching Medline9 or critical appraisal.10 Indeed, one review of 18 reports of graduate medical education in EBM found that the courses most commonly focused on critical appraisal skills, in many cases to the exclusion of other necessary skills.11 Some studies have looked at 90 minute workshops, whereas others included courses held over several weeks to months, thereby increasing the “dose” of teaching. Evaluation instruments should be tailored to the dose and delivery method, thereby assessing outcomes and behaviours that are congruent with the intended objectives.

    What are the outcomes?

    Effective teaching of EBM will produce a wide range of outcomes. Various levels of educational outcomes could be considered, including attitudes, knowledge, skills, behaviours, and clinical outcomes. The outcome level (the third dimension of the conceptual framework) reflects Miller's pyramid for evaluating clinical competence12 and builds on the competency grid for evidence based health care proposed by Greenhalgh.13 Changes in doctors' knowledge and skills are relatively easy to detect, and several instruments have been evaluated for this purpose.7 8 However, many of these instruments primarily evaluate critical appraisal skills, focusing on the role of “doer” rather than “user.” A Cochrane review of critical appraisal teaching found only one study that met the authors' inclusion criteria; that study showed that the course increased knowledge of critical appraisal.10 With our proposed framework, evaluation of this teaching course falls into the learner domain of “doing,” the intervention domain of “appraisal,” and the outcome domain of “knowledge.”

    Changes in behaviours and clinical outcomes are more difficult to measure because they require assessment in the practice setting. For example, in a study evaluating a family medicine training programme, doctor-patient interactions were videotaped and analysed for EBM content.14 A recent before and after study has shown that a multi-component intervention including teaching EBM skills and providing electronic resources to consultants and house officers significantly improved their evidence based practice (Straus SE et al, unpublished data). With our proposed framework, evaluation of this latter teaching intervention would be categorised into the learner domain of “doing.” The intervention domains include all five steps of EBM, and the outcome domain would be “doctor behaviour.”
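
    To show how such a categorisation might be recorded in practice, the sketch below (Python, illustrative only) restates the two examples above as entries along the three dimensions of the framework; the field names are ours, not part of the framework itself.

        # Illustrative sketch: recording where a teaching evaluation sits in the
        # three dimensions of the framework (learner, intervention, outcome).
        # The two entries restate the examples discussed in the text.

        from dataclasses import dataclass

        @dataclass
        class FrameworkCategory:
            learner: str              # mode of practice: "doing", "using", or "replicating"
            intervention: list[str]   # EBM steps targeted by the teaching
            outcome: str              # attitudes, knowledge, skills, behaviours, or clinical outcomes

        critical_appraisal_course = FrameworkCategory(
            learner="doing",
            intervention=["appraising"],
            outcome="knowledge",
        )

        multicomponent_intervention = FrameworkCategory(
            learner="doing",
            intervention=["asking", "acquiring", "appraising", "applying", "assessing"],
            outcome="doctor behaviour",
        )

        if __name__ == "__main__":
            for example in (critical_appraisal_course, multicomponent_intervention):
                print(example)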

    Implementing the evaluation framework

    The EBM task force developed teaching workshops for practising doctors that focused on formulating questions and searching for and applying preappraised evidence. Because these workshops were unlike traditional workshops that focused on the five steps of practising EBM,15 we concluded that evaluation of these workshops must be different. We created an evaluation instrument to detect an effect on learners' EBM knowledge, attitudes, and skills.

    When we applied the evaluation framework to our evaluation instrument we found that our learners' goals were different from what we were assessing (table 1). We found that we placed greater emphasis on the skills necessary for practising in the “doing” mode than those required in the “using” mode, whereas the intervention was targeted to improve “user” behaviour. Moreover, the assessment mirrored traditional evaluation methods, focusing on appraisal skills, with little attention paid to question formulation. Finally, we saw that our evaluation predominantly measured skills rather than behaviour. This reflection led us to redesign our evaluation instrument to more closely reflect the learning objectives. We also attempted to show how the evaluation framework could be used—how to move from a concept to actual use (table 2).

    Table 1 Application of evaluation framework to SGIM EBM Task Force evaluation tool

    Table 2 Application of the conceptual framework for formulating clinical questions

    Limitations of this framework

    Our model requires that teachers work with learners to understand their goals, to identify in what mode of practice they want to enhance their expertise, and to determine their preferred learning style. This simple model could be expanded to include other dimensions, including the role of the teacher and the “dose” and “formulation” of what is taught. However, our primary goal was to develop a matrix that was easy to use. Although we have applied this framework to several of the published evaluation instruments and have found it to be useful, others may find that it does not meet all of their requirements.

    What's next?

    While EBM teachers struggle with developing innovative course materials and evaluation tools, we propose a coordinated sharing of these materials in order to minimise duplication of effort. Using the proposed framework as a categorisation scheme, the task force is establishing an online clearinghouse to serve as a repository for evaluations of methods of teaching EBM including details on their measurement properties.2 Teachers will be able to identify evaluation tools that might be useful in their own setting, using the framework to target their needs.

    There is still little evidence about the effectiveness of different teaching methods,1 and attempting to evaluate such teaching is challenging given the complexity of the learners, the interventions, and the outcomes. One way to help meet these challenges is to develop a collaborative research network to conduct multicentre, randomised trials of educational interventions. We invite interested colleagues to join us in developing this initiative and to create the clearinghouse for evaluation tools (www.sgim.org/ebm.cfm).

    Summary points

    There is little evidence about the effectiveness of different methods of teaching evidence based medicine

    Doctors can practise evidence based medicine in one of three modes—as a doer, a user, or a replicator

    Instruments for evaluating different methods of teaching evidence based medicine must reflect the different learners (their learning styles and needs), interventions (including the dose and formulation), and outcomes that can be assessed

    Our framework provides only one way to conceptualise the evaluation of teaching EBM; many others could be offered. We hope that our model serves as an initial step towards discussion and that others will offer their suggestions so that we may work together towards improved understanding of the evaluation process and promote more rigorous research on the evaluation of teaching EBM.

    Footnotes

    • Sample questions from the task force's summative evaluation tool appear on bmj.com

    • The members of the SGIM EBM Task Force included: Rob Golub, Northwestern University, Chicago, IL; Michael Green, Yale University, New Haven, CT; Robert Hayward, University of Alberta, Edmonton, AB; Rajesh Mangrulkar, University of Michigan, Ann Arbor, MI; Victor Montori, Mayo Clinic, Rochester, MN; Eduardo Ortiz, DC VA Health Centre, Washington, DC; Linda Pinsky, University of Washington, Seattle, WA; W Scott Richardson, Wright State University, Dayton, OH; Sharon E Straus, University of Toronto, Toronto, ON. We thank Paul Glasziou for comments on earlier drafts of this article.

    • Funding SES is funded by a Career Scientist Award from the Ministry of Health and Long-term Care and by the Knowledge Translation Program, University of Toronto. DSB is funded in part by the Robert Wood Johnson Foundation Generalist Physician Faculty Scholars Program.

    • Competing interests None declared.
