BY-NC-ND 3.0 license Open Access Published by De Gruyter October 17, 2015

Understanding diagnostic error: looking beyond diagnostic accuracy

Jane Heyhoe, Rebecca Lawton, Gerry Armitage, Mark Conner and Neil H. Ashurst
From the journal Diagnosis

Abstract

Whether a diagnosis is correct or incorrect is often used to determine diagnostic performance despite there being no valid measure of diagnostic accuracy. In this paper we draw on our experience of conducting research on diagnostic error and discuss some of the challenges that a focus on accuracy brings to this field of research. In particular, we discuss whether diagnostic accuracy can be captured and what diagnostic accuracy does and does not tell us about diagnostic judgement. We draw on these points to argue that a focus on diagnostic accuracy may limit progress in this field and suggest that research which tries to understand more about the factors that influence decision making during the diagnostic process may be more useful in helping to improve diagnostic performance.

Introduction

In the first issue of Diagnosis, Robert Wachter reflected on the lack of a valid measure of diagnostic accuracy and the role this has played in limiting the profile of diagnostic error in the current patient safety agenda [1]. Our experience of research in this field has highlighted the problems associated with a focus on accuracy. In this paper we consider what information measures of diagnostic accuracy can and cannot provide and highlight some of the difficulties in trying to capture diagnostic accuracy. We conclude by arguing that an increase in research which examines the factors that influence decision making during the diagnostic process may be a more helpful approach to understanding why diagnostic errors occur.

What does diagnostic accuracy tell us?

In its simplest terms, diagnostic accuracy is about whether a diagnosis has been assigned correctly. Few would dispute that a correct diagnosis is what most clinicians strive for in order to ensure that patients receive the most effective treatment and the best outcome, but there are limits to what a focus on diagnostic accuracy can tell us. Measures of accuracy tell us whether the correct diagnosis was reached, if the correct diagnosis is known, and can help to identify the types of symptoms, illnesses and co-morbidities that are associated with misdiagnosis and delayed diagnosis. What they do not do is provide any information about what influences a clinician’s thinking when assessing a patient’s signs and symptoms; they do not allow us to assess what role these influences play in formulating differential diagnoses, or in the clinician’s subsequent actions and the clinical management of the patient. Nor do they indicate what types of decisions within each stage of the diagnostic process are associated with correct and incorrect diagnoses, or with the time taken to reach the right diagnosis.

The difficulties of measuring diagnostic accuracy

While a focus on accuracy may restrict our learning about diagnostic error, there are also obstacles to capturing an objective or meaningful measure of diagnostic accuracy. Here, we draw on our experience of conducting research on diagnostic error to discuss some of the issues that make diagnostic accuracy difficult to measure.

Can diagnostic accuracy be objectively measured?

In one of our research studies [2], we asked an expert panel of consultants from different specialties working in acute care to independently identify the pieces of information in three clinical scenarios that they considered most important for making an accurate diagnosis. The clinical scenarios were: (1) a female presenting to the Accident and Emergency department for the fourth time with abdominal pain; (2) a female surgical inpatient complaining of sudden chest pain two days after hemicolectomy; and (3) a male brought to the Accident and Emergency department by ambulance after being found semi-conscious at home, whose symptoms included breathlessness. For none of the three scenarios could consensus be achieved.

For certain conditions, a reasonable or correct diagnosis can be derived from expert consensus, definitive tests or case note review. However, because such methods are retrospective and subject to hindsight bias [3], they are unlikely to reflect how a diagnosis is assigned in a real-life clinical setting. The shift by Singh and colleagues towards examining and measuring ‘missed opportunities’ rather than ‘diagnostic errors’ [4, 5, 6] attempts to address the difficulty of attaining agreement on what constitutes a diagnostic error. This approach looks back through the diagnostic process in a particular case to determine whether the presentation, situation and context would have allowed for different choices or actions, and whether a quicker or correct diagnosis would have been attained had a different decision been made or course of action pursued. It can be applied to all steps in the diagnostic process and assists in identifying where missed opportunities occurred and the factors that contributed to them. This, along with our own experience [2], indicates that in some contexts defining the correct or incorrect response to a case presentation is far from straightforward. It also highlights that judgements of diagnostic accuracy can be subjective, resting on expert opinion that may be influenced by factors such as specialty, previous experience and treatment preferences.

The influence of case management on diagnostic accuracy

Both senior and junior doctors who have collaborated in our research have stressed that when a patient presents with signs and symptoms in a real-life clinical situation, the role of the clinician is often to stabilise or ameliorate the presenting symptoms, whether or not a clear diagnosis has been made. Whether a clinician’s initial diagnosis is guided by an immediate intuitive response to the patient’s presentation, or is reached through a slower and more deliberate process, the possible differential diagnoses are often central in guiding the immediate treatment and actions taken. In some cases, a patient’s response to either immediate treatment (e.g. trying a glyceryl trinitrate spray to see whether chest pain is cardiac in origin) or a longer period of management (e.g. eliminating different foods to ascertain whether chronic lower gastrointestinal symptoms are due to a food intolerance) may help doctors reach a definitive diagnosis, or may indicate that they need to think again. This suggests that diagnosis and case management are often linked processes.

The link between diagnosis and case management is further supported by research. Kostopoulou and colleagues presented 84 family doctors with seven difficult computer-based diagnostic scenarios for which they could seek further information for diagnosis and management. Scoring criteria developed from the literature and expert opinion showed that inappropriate management that could have been potentially harmful was planned in 78% of incorrectly diagnosed cases [7]. While this suggests that achieving a correct diagnosis is essential for safe patient care, the reality is that clinicians are often faced with presentations that require treatment when the diagnosis is uncertain or at best a guess. In these clinical contexts it is unclear whether diagnostic accuracy, or the patient’s response to the clinician’s management of signs and symptoms, is responsible for the outcome. The role that case management plays may therefore be difficult to disentangle when trying to measure whether a correct diagnosis was made.

At what point in the diagnostic process should diagnostic accuracy be measured?

Confusing clinical pictures and deviations from expected trajectories may contribute to misdiagnosis or diagnostic delay [3]. For doctors, it is clear that diagnosis is not a singular event occurring at a given point in time, but an ongoing, messy process that may take minutes, days, weeks or sometimes years. This may be due to a natural change in a patient’s signs and symptoms, a patient’s positive or negative response to treatment, the need to identify and treat multiple conditions, or the care of a patient receiving a number of treatments which may reduce or exacerbate symptoms with different aetiologies [3]. Moreover, the management of a patient may involve multiple diagnostic steps and decisions, perhaps becoming more focused over time. For example, a patient who has a stroke first needs someone in casualty to recognise it and order a CT scan. Half an hour later they need an accurate diagnosis of the type of stroke and whether it is amenable to thrombolysis. After two weeks or so they also need to know the exact areas of the brain affected, to help in selecting the right rehabilitation therapies. At which point in this treatment pathway should the accuracy of diagnosis be measured, and at what depth of accuracy is a diagnosis meaningful?

Diagnosis is not a static decision making problem

Diagnosis is a dynamic process, yet many research studies approach it as a static decision problem. Drawing on the naturalistic decision making (NDM) approach [8] and on models of decision making as a process of interconnected action sequences [9], Wears [10] argues that although reasoned diagnostic decision making is assumed to follow a logical sequence, moving from initial activation, observation and identification to task definition, step formulation and execution, real-life diagnostic decision making is not this straightforward. Instead, it jumps between these stages, depending on continuous feedback and reappraisal from a synthesis of intricate and changeable clinical information and outcomes. Capturing accuracy within such a dynamic process is problematic: it would require complex studies examining every decision making action and sequence from presentation, history taking, clinical examination and clinical tests through to diagnosis, treatment and follow-up. This is unlikely to be achievable, and would generate highly variable patterns of response that would be difficult to interpret.

Understanding decision making during the diagnostic process: how might this help?

A measure of diagnostic accuracy represents a single, static decision and cannot tell us anything meaningful about what assists or hinders diagnostic judgement. To do this we need a more sophisticated approach, one that acknowledges the dynamic process within which diagnosis takes place. A more amenable and insightful approach may be to focus on information gathering: identifying where deviations occur, what kinds of factors affect the trajectory and timeliness of progression through the later stages of the diagnostic process, and the possible implications for diagnosis, in terms of delay as well as misdiagnosis. This would assist in developing interventions that target the factors which facilitate, or act as barriers to, timely and accurate diagnosis.

To assess the factors that influence the decisions made during the diagnostic process, it may be helpful to draw on the work of researchers who have broken the diagnostic decision making process (i.e. the set or sequence of actions taken to reach a diagnosis) into distinct stages [7, 10, 11]. Schiff et al. [11] applied the DEER taxonomy tool across 669 cases involving 310 clinicians. The tool divides the diagnostic process into seven distinct stages (access/presentation, history, physical exam, tests, assessment, referral/consultation and follow-up) in order to identify at what stage diagnostic error occurred and what went wrong within each stage. Dividing the diagnostic process into distinct stages helps to identify the specific actions requiring decisions within each stage, and allows the development of measures which can assess specific decisions and the factors that influence a particular diagnostic stage or action. It also enables exploration of the implications of decision choices at a particular stage (e.g. delay in diagnosis), and of whether choices at a specific stage are likely to affect progression through subsequent diagnostic decision stages [7, 10, 11].

There is increasing evidence that omissions and deviations within the information gathering stage (e.g. presentation, history, physical examination, tests and differential diagnosis) contribute widely to diagnostic error [10, 12, 13]. Clinicians’ omissions in gathering critical information have been found to hinder diagnostic accuracy and appropriate management [10, 11, 13], while the unnecessary ordering of tests and investigations has been linked with increased patient harm [13] as well as increased costs [14]. Failure to take a thorough patient history or perform a physical examination has been associated with diagnostic delay [15], and differential diagnoses that are too narrow in scope have been found to lead to missed diagnoses [16]. This suggests that the thoroughness and sequence with which doctors gather relevant facts may have important implications for whether they are on the right trajectory towards an accurate and timely diagnosis. Until a valid measure of diagnostic accuracy is established, a focus on how the thoroughness and order of information gathering affect the length of time between a patient’s initial presentation and treatment or referral may help us to understand the role these factors play in diagnostic delay.

Looking beyond diagnostic accuracy

Diagnostic decision making is dynamic and takes place within a complex setting. It begins as soon as the patient presents and, in primary care, may take long periods of time: sometimes weeks, months or years. This makes it difficult to determine what an ‘accurate’ diagnosis and the most ‘appropriate’ management are.

In this paper we have argued that we can learn more about diagnostic error by increasing our understanding of the factors that influence decision making during a complex diagnostic process. As there are currently a number of different definitions of the steps involved in diagnosis [5, 10, 11], we must now aim to establish an agreed and consistent model of the diagnostic process before we can begin to measure the decisions made at each step. Clinicians can also inform us of the most appropriate way to capture real-life diagnostic decision making and the feasibility and limitations of particular types of measurement. A worthwhile and cost-effective research method may be to conduct and learn from qualitative studies with clinicians who are uniquely placed to provide knowledge of diagnostic decision making within the context in which it takes place. The tacit knowledge obtained can then be used to develop more specific and informed models and hypotheses which can then be tested using quantitative methods [17, 18]. Patients are also increasingly active partners in the diagnostic process and more work should be undertaken to assess how we can better involve them in helping to avoid missed opportunities [6, 19, 20].

We agree with Robert Wachter’s suggestion that a fixation on accuracy as an outcome may limit and overshadow the development of a research agenda that examines and develops measures of other important aspects of the diagnostic process [1]. The Institute of Medicine’s report on diagnostic error [21] later this year will stimulate a new debate about how we can make effective and meaningful progress in this field. We need to look beyond outcome measures for diagnostic accuracy if we are going to truly understand how we can improve other critical aspects of diagnostic performance.


Corresponding author: Jane Heyhoe, Bradford Institute for Health Research, Bradford, BD9 6RJ, UK, E-mail:

Acknowledgments

The authors thank two anonymous reviewers for their helpful feedback on an earlier draft of this manuscript.

  1. Author contributions: All the authors have accepted responsibility for the entire content of this submitted manuscript and approved submission.

  2. Research funding: None declared.

  3. Employment or leadership: None declared.

  4. Honorarium: None declared.

  5. Competing interests: The funding organization(s) played no role in the study design; in the collection, analysis, and interpretation of data; in the writing of the report; or in the decision to submit the report for publication.

References

1. Wachter RM. Diagnostic errors: central to patient safety, yet still in the periphery of safety’s radar screen. Diagnosis 2014;1:19–21.

2. Heyhoe J. Affective and cognitive influences on decision making in healthcare [PhD thesis]. Leeds: University of Leeds, 2013. Available from: White Rose eTheses Online.

3. Zwaan L, Singh H. The challenges in defining and measuring diagnostic error. Diagnosis 2015;2:97–103.

4. Singh H, Daci K, Petersen LA, Collins C, Petersen NJ, Shethia A, et al. Missed opportunities to initiate endoscopic evaluation for colorectal cancer diagnosis. Am J Gastroenterol 2009;104:2543–54.

5. Singh H. Helping organizations with defining diagnostic errors as missed opportunities in diagnosis. Jt Comm J Qual Patient Saf 2014;40:99–101.

6. Lyratzopoulos G, Vedsted P, Singh H. Understanding missed opportunities for more timely diagnosis of cancer in symptomatic patients after presentation. Br J Cancer 2015;112:S84–S91.

7. Kostopoulou O, Oudhoff J, Nath R, Delaney BC, Munro CW, Harries C, et al. Predictors of diagnostic accuracy and safe management in difficult diagnostic problems in family medicine. Med Decis Making 2008;28:668–80.

8. Klein GA, Orasanu J, Calderwood R, Zsambok CE, editors. Decision making in action: models and methods. Norwood, NJ: Ablex Publishing Corporation, 1993.

9. Rasmussen J. Diagnostic reasoning in action. IEEE Trans Syst Man Cybern 1993;23:981–92.

10. Wears RL. What makes diagnosis hard? Adv Health Sci Educ 2009;14:19–25.

11. Schiff GD, Hasan O, Kim S, Abrams R, Cosby K, Lambert BL, et al. Diagnostic error in medicine: analysis of 583 physician-reported errors. Arch Intern Med 2009;169:1881–7.

12. Reilly JB, Von Feldt JM. In the real world, faster diagnoses are not necessarily more accurate [letter]. Acad Med 2013;88:297–8.

13. Zwaan L, Thijs A, Wagner C, van der Wal G, Timmermans DR. Relating faults in diagnostic reasoning with diagnostic errors and patient harm. Acad Med 2012;87:149–56.

14. Newman-Toker DE, McDonald KM, Meltzer DO. How much diagnostic safety can we afford, and how should we decide? A health economics perspective. BMJ Qual Saf 2013;22(Suppl 2):ii11–20.

15. Siminoff LA, Rogers HL, Thomson MD, Dumenci L, Harris-Haywood S. Doctor, what’s wrong with me? Factors that delay the diagnosis of colorectal cancer. Patient Educ Couns 2011;84:352–8.

16. Ely JW, Kaldjian LC, D’Alessandro DM. Diagnostic errors in primary care: lessons learned. J Am Board Fam Med 2012;25:87–97.

17. Barbour RS. The role of qualitative research in broadening the ‘evidence base’ for clinical practice. J Eval Clin Pract 2000;6:155–63.

18. Morse JM. What is the domain of qualitative health research? Qual Health Res 2007;17:715–7.

19. Graedon T, Graedon J. Let patients help with diagnosis. Diagnosis 2014;1:49–51.

20. McDonald KM, Bryce CL, Graber ML. The patient is in: patient involvement strategies for diagnostic error mitigation. BMJ Qual Saf 2013;22(Suppl 2):ii33–9.

21. Institute of Medicine (IOM) of the National Academies. Diagnostic Error in Health Care. 2014. http://www.iom.edu/Activities/Quality/DiagnosticErrorHealthCare.aspx. Accessed: 24 April 2015.

Received: 2015-5-1
Accepted: 2015-9-8
Published Online: 2015-10-17
Published in Print: 2015-12-1

©2015, Jane Heyhoe et al., published by De Gruyter

This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 3.0 License.