Avoiding and identifying errors in health technology assessment models: qualitative study and methodological review

Health Technol Assess. 2010 May;14(25):iii-iv, ix-xii, 1-107. doi: 10.3310/hta14250.

Abstract

Background: Health policy decisions must be relevant, evidence-based and transparent. Decision-analytic modelling supports this process, but its usefulness relies on its credibility. Errors in mathematical decision models or simulation exercises are unavoidable, yet little attention has been paid to the processes of model development. Numerous strategies for avoiding and identifying errors could be adopted, but it is difficult to evaluate their merits for improving the credibility of models without first developing an understanding of error types and their causes.

Objectives: The study aims to describe the current understanding of errors in the health technology assessment (HTA) modelling community and to generate a taxonomy of model errors. Its four primary objectives are to: (1) describe the current understanding of errors in HTA modelling; (2) understand the processes currently applied by the technology assessment community to avoid errors during model development and to debug and critically appraise models for errors; (3) combine HTA modellers' perceptions of model errors with the wider non-HTA literature to develop a taxonomy of model errors; and (4) explore potential methods and procedures to reduce the occurrence of errors in models. The study also describes the model development process as perceived by practitioners working within the HTA community.

Data sources: A methodological review was undertaken using an iterative search methodology. Exploratory searches informed the scope of the interviews; later searches focused on issues arising from them. Searches were undertaken in February 2008 and January 2009. In-depth qualitative interviews were conducted with 12 HTA modellers from the academic and commercial modelling sectors.

Review methods: All qualitative data were analysed using the Framework approach. Descriptive and explanatory accounts were used to interrogate the data within and across themes and subthemes: organisation, roles and communication; the model development process; definition of error; types of model error; strategies for avoiding errors; strategies for identifying errors; and barriers and facilitators.

Results: There was no common language for discussing modelling errors, and the perceived boundaries of what constitutes an error were inconsistent. When asked to define model error, interviewees tended to exclude matters of judgement and to focus on 'slips' and 'lapses', yet slips and lapses accounted for less than 20% of the discussion of error types. Interviewees devoted 70% of the discussion to the softer elements of the process, defining the decision question and conceptual modelling, which are largely matters of judgement, skills, experience and training. The study's original focus was model errors, but it may be more useful to refer to modelling risks.

Several interviewees discussed the concepts of validation and verification, with notable consistency in interpretation: verification meaning the process of ensuring that the computer model correctly implements the intended model, and validation meaning the process of ensuring that a model is fit for purpose. The methodological literature on verification and validation of models makes reference to the hermeneutic philosophical position, highlighting that the concept of model validation should not be externalised from the decision-makers and the decision-making process.

Interviewees described examples of all major error types identified in the literature: errors in the description of the decision problem, in model structure, in the use of evidence, in implementation of the model, in operation of the model, and in the presentation and understanding of results. These HTA error classifications were compared against existing classifications of model errors in the literature. A range of techniques and processes is currently used to avoid errors in HTA models: engaging with clinical experts, clients and decision-makers to ensure mutual understanding; producing written documentation of the proposed model; explicit conceptual modelling; stepping through skeleton models with experts; ensuring transparency in reporting; adopting standard housekeeping techniques; and ensuring that those involved in the model development process have sufficient and relevant training. Clarity and mutual understanding were identified as key issues, but their current implementation is not framed within an overall strategy for structuring complex problems.
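The housekeeping and debugging techniques referred to above are not elaborated in the report's abstract. Purely as an illustration, the following Python sketch shows the kind of verification checks, in the interviewees' sense of confirming that the computer model correctly implements the intended model, that might be applied to a hypothetical three-state Markov cohort model; the states, transition probabilities and function names are invented for the example and do not come from the study.

    import numpy as np

    # Hypothetical three-state Markov cohort model (Well, Sick, Dead).
    # The transition probabilities below are invented for illustration.
    P = np.array([
        [0.90, 0.08, 0.02],  # from Well
        [0.00, 0.85, 0.15],  # from Sick
        [0.00, 0.00, 1.00],  # from Dead (absorbing state)
    ])

    def run_cohort(transitions, n_cycles=40):
        """Propagate a cohort that starts entirely in the first state."""
        trace = np.zeros((n_cycles + 1, transitions.shape[0]))
        trace[0, 0] = 1.0
        for t in range(n_cycles):
            trace[t + 1] = trace[t] @ transitions
        return trace

    # Check 1: each row of the transition matrix must sum to 1.
    assert np.allclose(P.sum(axis=1), 1.0), "transition probabilities do not sum to 1"

    # Check 2: state occupancy must remain a valid distribution in every
    # cycle (non-negative, summing to 1): the cohort neither leaks nor inflates.
    trace = run_cohort(P)
    assert np.all(trace >= 0.0), "negative state occupancy"
    assert np.allclose(trace.sum(axis=1), 1.0), "cohort total drifts from 1"

    # Check 3 (extreme-value test): with mortality set to zero, nobody
    # should ever enter the Dead state.
    P_no_death = np.array([
        [0.90, 0.10, 0.00],
        [0.00, 1.00, 0.00],
        [0.00, 0.00, 1.00],
    ])
    assert np.allclose(run_cohort(P_no_death)[:, 2], 0.0), "deaths despite zero mortality"

    print("All verification checks passed.")

Checks of this kind address verification only; validation, as the interviewees defined it, still requires engagement with clinical experts, clients and decision-makers to establish that the model is fit for purpose.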

Limitations: Some of the questioning may have biased interviewees' responses, but as all interviewees were represented in the analysis, no rebalancing of the report was deemed necessary. A potential weakness of the literature review was its focus on spreadsheet and program development rather than on model development specifically. The identified literature on programming errors was also very narrow, despite broad searches being undertaken.

Conclusions: Published definitions of overall model validity, comprising conceptual model validation, verification of the computer model, and operational validity of the use of the model in addressing the real-world problem, are consistent with the views expressed by the HTA community and are therefore recommended as the basis for further discussions of model credibility. Such discussions should focus on risks, including errors of implementation, errors in matters of judgement, and violations. Discussions of modelling risks should reflect the potentially complex network of cognitive breakdowns that lead to errors in models, and existing research on the cognitive basis of human error should inform any examination of modelling errors. A better understanding is needed of the skills required for the development, operation and use of HTA models. Interaction between modeller and client in developing a mutual understanding of a model establishes that model's significance and its warranty; model credibility is thus the central concern of decision-makers using models, so it is crucial that the concept of model validation is not externalised from the decision-makers and the decision-making process. Recommended areas for future research are studies of verification and validation; studies of the model development process; and identification of modifications to the modelling process aimed at preventing the occurrence of errors and improving the identification of errors in models.

Publication types

  • Review

MeSH terms

  • Data Interpretation, Statistical
  • Decision Support Techniques*
  • Evidence-Based Medicine / methods
  • Health Policy*
  • Humans
  • Policy Making
  • Qualitative Research
  • Reproducibility of Results
  • Research Design / standards*
  • Technology Assessment, Biomedical / methods*