Reflections on “Helping practitioners understand the contribution of qualitative research to evidence-based practice”
Although I congratulate Newman et al for bravely taking up the challenge that wading through the mire of qualitatively derived evidence entails, I find that their ultimate argument leaves me with more confusion than clarity. I fully agree with many of the excellent points they raise, but I would certainly take issue with others. However, reflecting on the thesis of their argument, I realise that the important conversation is not to argue the specific claims they make about qualitative research, its application, or its evaluation, but rather to examine the reasons that they are explaining these in the first place. Sometimes the greatest service a thoughtful paper can provide is sufficient discomfort to provoke further critical thinking. In that light, I hope that my response is understood as a beginning dialogue toward finding the clarity that we all aspire to within this complex, but ultimately fascinating, question.
Newman et al have usefully articulated many of the current confusions and contradictions within the existing literature on what constitutes qualitative research, the criteria against which its quality can be determined, and the context within which its products can be reasonably taken up to inform clinical practice. They alert us to the taxonomy of methodological approaches that appear in our nursing literature (and the interdisciplinary literature upon which we draw) with regard to distinctions among these approaches and the manner in which they are actually applied. Quite rightly, they note that there are often far fewer distinctions between methods claiming to draw upon distinct approaches than would be anticipated. However, I would take issue with their conclusion that these methods are not all that dissimilar from one another after all. From my perspective, the problem is that none of these conventional qualitative methods was developed for quite the purposes that nurses tend to have in mind when they set out to answer a qualitatively oriented clinical question. Consequently, nurses tend to begin with the methodological approach most favoured by their supervisors, academic communities, or the granting agencies from which they seek support, or with the one that best appeals to their imaginations; they then modify various design stipulations and analytic strategies so that the products will “speak” to a clinical nursing audience. What often distinguishes qualitative research products, then, is not so much the named approach but rather the extent to which the researcher has effectively navigated the ontological underpinnings of the research question (is there an underlying “truth” that I believe I am uncovering, or is knowledge a moving target, located within time and context, amenable only to description?) and come to a clear understanding of his or her own epistemological positioning (if there is a useful “truth” to be produced from my study, how will its derivation relate to what can be subjectively or objectively discerned?). Because, in my opinion, many qualitative nurse researchers have been somewhat ad hoc in the extent to which they take design direction from the methodological approaches they claim to be using, I have advocated that they might reduce confusion and aspire to better clarity by avoiding the use of “named” methods from which they draw only partially. For that reason, I have articulated the value of a non-categorical interpretive description,1,2 somewhat akin to what Sandelowski has labelled qualitative description,3 as an approach that permits one to tap the insights of the original methodologies for direction without sacrificing the design logic that derives from the nursing orientation to the research question and the clinical nature of the phenomenon being studied.
From my perspective, the issue of application in the context of evidence-based practice is, therefore, not simply one of evaluating research rigour. Having had the opportunity to participate in many large qualitative metasynthesis projects (ie, projects that attempt to synthesise generalisable knowledge from a large body of published qualitative research within a particular substantive field), I have come to appreciate that rigour is so inherently tied to methodological direction that it can become irrelevant to the quality of knowledge associated with the eventual product. For example, a methodologically tight study can create “bloodless findings” (the “so what?” factor), whereas a study that has departed significantly from its methodological underpinnings may have illuminated a critically important, and hitherto unarticulated, clinical insight (the “aha!”). What rigour tells me is the logic trail by which the findings were obtained and the nature of the claims that the author can therefore legitimately report. Where qualitative researchers so often miss the mark is in maintaining integrity to the ontological and epistemological grounding within which their study has been situated. For example, although elucidating the subjective perspective of a small sample of disgruntled patients can be a useful exercise, it cannot provide us with a justification for recommendations about service delivery policy for large populations. The researcher’s ability to own up to the level and nature of the knowledge he or she has produced for eventual application becomes the most critical element in determining the manner in which findings will (or will not) have practice application.
Thus, from my perspective, the strategy of encouraging clinicians to become well informed about an increasingly complex lexicon of qualitative research approaches so that they can evaluate rigour and quality does not begin to solve the problem of how and when qualitative findings ought to be taken up in practice.
If qualitative research is to be understood, as Newman et al suggest, as “a heuristic device for deepening and broadening understanding of practice,” it should never be presumed to be the exclusive source of such insight, but merely one additional element in what one would hope is a complex understanding informed by empiricism, ethics, clinical knowledge, pattern recognition, and countless other paths toward shared wisdom. The contribution that a brilliant qualitative study can make to that complexity of perspectives is the new angle of vision, the alternative perspective that sheds light on elements obscured by adherence to correlations or commonalities, and that may help us align the other pieces of the evidence puzzle in such a manner that we “see” things somewhat differently. Just as I believe to be the case with quantitative inquiry, the products of individual qualitative studies should never be taken out of that larger context as evidentiary on their own. Rather, they will contribute dramatically to the “evidence basis” with which we inform our practice when they clarify, elaborate, and unpack what we think we already know and render an alternative perspective from which to understand it differently, more deeply, or more fully.
Thus, in the final analysis, I find that Newman et al may have further confused the dialogue by asking the wrong questions: seeking “fit” between the evidence-based practice agenda and the existing products of qualitative research rather than seeking “insight” into the manner in which qualitative findings may inform the interpretation of available evidence. Although they clearly recognise this as “a crucial part of the evidence-based practice process,” the expansive model of evidence-based practice they advocate orients us toward qualities inherent in the studies themselves rather than toward the nature of the knowledge field to which those studies may or may not make an informative contribution.
From my perspective, the agenda of the qualitative research community to be “counted” within the evidence-based practice movement is entirely misdirected if it seeks to educate a clinical and policy audience in the obscurities of rigour within an increasingly complex methodological terrain. Rather than advancing the claims of qualitative research as essentially different or as inherently more humanistic knowledge forms, I think it behoves us to focus our attention on what it is that we want to know in a clinical field, what objectives we wish to achieve, and what aspects of the field may not yet have been fully illuminated in the attempt to develop the evidence base necessary to feel confident about our practice directions. I challenge the suggestion that clinicians ought to be taught to discern the quality criteria by which individual pieces of qualitative inquiry might be judged, and then to work out how their findings ought to be applied in practice. In many instances, such findings may tell us more about the theoretical and ideological orientation of the researcher than about the clinical phenomena themselves. Instead, if the qualitative nursing research community takes up the evidence-based challenge in earnest, I see it as a marvellous source of collateral, confirmatory, or even competing evidence: evidence that contributes by shedding new angles of vision on what we think we understand through the other mechanisms by which we counter the human fallibility of too readily assuming that we understand phenomena. The evidence-based practice movement was intended to counter the capacity within professional health care to see what it wanted to see and to adhere to the insights derived from individual clinical wisdom.4 If qualitative studies, especially those with the appearance of methodological rigour, perpetuate some of those embedded assumptions, they will have further complicated the cause.
I would therefore challenge us to shift the dialogue away from methodological purity and attention to rigour and toward understanding the qualitative enterprise as having the capacity, should it take up the challenge, to offer a fresh perspective that illuminates the marvellous complexities of the phenomena about which nursing is concerned.