Campaigns for fundamental change need to push on all fronts. The movement to reduce diagnostic error in medicine is correct to address simultaneously the cognition of the physician, the coordination of the medical team, and institutional support for the provision of accurate and timely information. Yet on the topic of the physician’s diagnostic thinking, much more attention has been devoted to recognizing and correcting mental shortcuts and strategic shortcomings than to the mastery and maintenance of basic diagnostic competency. These are equally essential.
Physicians’ thinking causes too many diagnostic errors. A recent review of hospital patient records found that 84% of adverse diagnostic events were due at least in part to cognitive error. The diagnostic error rate revealed by autopsies has been stable over recent decades (though the proportion of deaths deemed to warrant autopsy is decreasing), with 10% of autopsies showing misdiagnosis that contributed to the patient’s outcome and another 35% showing incidental diagnostic error. Though much misdiagnosis may be attributed to system failures [3, 4], and promising work has been done on reorganizing practice at the system level, it is still worth exploring how to reduce the number of diagnostic errors attributable to physicians’ cognitive errors.
On ImproveDx, the email discussion listserv of the Society to Improve Diagnosis in Medicine, there is abiding interest in how to prevent cognitive error from causing misdiagnosis. Drawing on psychology’s account of heuristics and biases, and of intuitive and analytic cognition [6, 7], it has been suggested that physicians can learn to reflect on a patient’s situation, remind themselves that like all humans they use cognitive strategies that are vulnerable to error, look for how they might make such errors with this patient (metacognition), and follow corrective strategies that make such errors less likely. The promise of this vision is well expressed in this summary statement:
“…knowing how the metacognitive process works allows us to train students and inexperienced clinicians in its use, ultimately helping them minimize or avoid diagnostic errors”.
To focus on the potential for reducing misdiagnosis by reducing cognitive strategy error implicitly assumes that physicians have adequately mastered the basic principles of diagnosis. Diagnosis is one of the essential competencies and a focus of training and assessment at all levels of medical education. But in diagnosing, both students and experienced clinicians often fail to make full use of what is known about disease prevalence, the informativeness of findings, or the utility of appropriate treatments [11, 12]; and they are resistant to instruction in the relevant principles [13, 14].
If indeed many physicians do not know how to apply diagnostic reasoning appropriately, one might ask why we focus on the heuristic cognitive strategies that may bias diagnosis when we could as profitably focus on physicians’ basic understanding of the principles of diagnosis and of the impact of particular signs and symptoms on diagnoses’ likelihoods. One could substitute “diagnosis” for “metacognition” in the statement above:
“… knowing how the diagnostic process works allows us to train students and inexperienced clinicians in its use, ultimately helping them minimize or avoid diagnostic errors.”
That too is a plausible vision. Yet it seems impossible to think simultaneously about both basic diagnostic skill and the typical human cognitive strategies that can produce errors in applying it. In the visual figure/ground illusion (see Figure 1), sometimes we see the columns, other times the spaces between the columns. Our discussions of improving physicians’ diagnoses have recently been addressing only one part of the problem. The project of eliminating cognition-caused misdiagnosis should not focus solely on the figure, the awareness and suppression of heuristic strategies, and neglect the ground. I say this not because the metacognitive approach is wrong, but because it is not easy, not guaranteed effective, and not enough.
If it were easy to stop a habit with metacognitive insight, why do so many still struggle with alcohol, tobacco, anorexia, and obesity? Why is it so difficult for physicians to change their own behaviors that evidence shows are ineffective or harmful? While insight is a legitimate component of psychotherapy, it is known to involve hard work and slow progress.
Why think metacognition is cognitively demanding? Like diagnosis, it requires knowing categories and activating them for the present situation through a pattern recognition process. Two distinct modes of applying metacognitive knowledge are implied by the “general” and “specific” levels of metacognition (Table 4 of Croskerry’s exposition of the theory). A general-level understanding of the way cognitive heuristics can produce misdiagnoses involves: knowing the categories of error-producing strategy (such as judging a diagnosis’s probability by the representativeness of the patient’s presentation); knowing corresponding strategies to prevent those errors; allocating attention and effort to monitor one’s diagnostic thinking for the possibility that one has used one (or more) of the general strategies with this particular patient; recognizing which general strategy one has used in its particular manifestation and recalling an appropriate general corrective strategy; and modifying that corrective for application in the context of the particular cognitive processes used with this patient.
Metacognition is demanding because the physician would need to learn to recognize many general heuristic strategies. Hogarth described 29, Croskerry [17, 18] listed 33, and Wikipedia’s current “List of cognitive biases” has 61 that affect memory and 95 that distort decision making, belief, and behavior. Clinicians in the ImproveDx conversations have asserted that they will not learn the full set. Perhaps one could reduce the study load by concentrating on the most common or troublesome categories of cognitive bias, and on mastering the most effective debiasing techniques. For example, these cognitive errors are often mentioned: stopping diagnostic search too soon (e.g., premature closure, anchoring on the first hypothesis compounded by confirmation bias), failure to consider prevalence (e.g., base rate neglect), and failure to appreciate evidence’s impact (e.g., pseudodiagnosticity). Correspondingly, there may be just a few powerful corrective strategies: for example, to prevent stopping search too soon, force oneself to look at the case differently (take an outsider’s perspective, consider the opposite, use the competing hypotheses heuristic) [19–21].
Besides learning the types of cognitive bias, metacognition at the general level also requires monitoring one’s everyday diagnosing, whether one is vigilant for the full Wikipedia list or for a reduced set of the most important biases. This moment of reflection about each patient, though brief, still adds time and effort to the physician’s day. One might be inspired to reflect after hearing a lecture on metacognition, but how long will such a commitment persist?
The specific-level mode of applying metacognitive knowledge recognizes that physicians usually encounter familiar patient presentations, and that their knowledge (illness scripts [22, 23] or internalized clinical algorithm trees) is shaped by past study and experience so that strategies are ready to apply once activated through pattern recognition. In Croskerry’s example of the patient bitten by a dog (p. 116), the animal bite script should include the pitfall of failing to recognize that a patient is immunocompromised, along with a cognitive forcing strategy for avoiding that pitfall: asking whether any of a list of causes of immunocompromise (e.g., splenectomy) is present. When one has learned the particular pitfalls and corrective strategies of every presentation, there is no perceived effort in the application of metacognition. The work has been done before, learned explicitly and then made automatic through practice, turning explicit System 2 strategies into intuitive System 1 responses [21, 23, 24].
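A toy sketch may make the specific-level idea concrete. The data structure below is my own illustrative construction, not something from the cited literature; it encodes only the dog-bite example just described, attaching each pitfall and its forcing strategy to a presentation so both are retrieved together by pattern recognition:

```python
# A minimal illness-script store: each presentation carries its known
# pitfalls and the cognitive forcing strategy that corrects each one.
illness_scripts = {
    "animal bite": {
        "pitfalls": {
            "missed immunocompromise": (
                "Ask about causes of immunocompromise (e.g., splenectomy) "
                "before closing the diagnosis."
            ),
        },
    },
}

def forcing_strategies(presentation):
    """Return the corrective strategies attached to a presentation's script."""
    script = illness_scripts.get(presentation, {})
    return list(script.get("pitfalls", {}).values())

print(forcing_strategies("animal bite"))
```

The design point is simply that, once the pitfall is stored inside the script itself, recalling the presentation recalls the corrective at no extra cost.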
To estimate the burden of learning specific metacognitive knowledge, adding pitfalls and correctives to every script, assume that the physician needs to be able to diagnose 120 presenting complaints. A systematic search for all possible pitfalls using the Wikipedia list would require 156 × 120 = 18,720 checks. Surely expert clinical teachers would identify a much smaller set of relevant pitfalls. Still, the student will need to study each explicitly, in medical school or residency, and practice it enough that the corrective strategy comes to mind whenever needed. Whether physicians practice metacognition by monitoring their diagnostic thinking for signs that they may be using heuristics that can produce misdiagnoses, or by learning a large number of specific pitfalls and correctives as part of their education, the project requires a great deal of work.
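The burden estimate is a simple product of the counts given above (61 memory biases plus 95 decision-and-belief biases, applied to 120 presenting complaints):

```python
memory_biases = 61      # Wikipedia biases affecting memory
decision_biases = 95    # Wikipedia biases distorting decisions, beliefs, behavior
presentations = 120     # presenting complaints a physician must cover
checks = (memory_biases + decision_biases) * presentations
print(checks)  # 18720 pitfall checks
```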
Some discussions of the promise of metacognition imply that it can very effectively control cognitive errors of diagnosis. This is overly optimistic. Diagnosis is intrinsically hard because the world and our knowledge of it are uncertain [26, 27]. We do not yet know how much the rate of misdiagnosis can be reduced through awareness of cognitive errors. There are wonderful success stories, such as the one related by a resident on ImproveDx: in her third year of medical school, her patient with psychiatric symptoms died; the team had missed a medical illness. Grieving and seeking understanding, she read about cognitive diagnostic errors. In residency she encountered another patient with a psychiatric diagnosis complaining of physical symptoms. This time, the resident did not accept the team’s judgment that “it is not medical,” worked up the patient as appropriate, and found two major medical problems. The resident’s anecdote affirms the value of teaching metacognition: Croskerry had listed some particular pitfalls, including “failing to look for medical problems once a psychiatric diagnosis has been found, unwillingness to accept that patients might have more than one diagnosis,” and even accurately described how availability (a heuristic strategy) can promote the metacognitive recognition of cognitive errors (due to heuristic strategies). Thus the resident was primed, by the trauma of her earlier patient and by her reading on cognitive error, to recognize when the team failed to address the later patient’s medical problems once they had a psychiatric diagnosis.
But here is another anecdote. In my family medicine department’s morbidity and mortality rounds, the residents described a 27-year-old woman with abdominal pain and a mass near the adnexa. The FM residents feared cancer, but the OB/GYN residents declared it an abscess. Abscesses are a more common cause of such complaints, even though the configuration of symptoms looked very much like cancer. The patient could not have an investigational procedure until an unrelated condition was resolved. I recognized this as having the same structure as the classic demonstrations of the representativeness heuristic – the case looks more similar to the prototype of the rare disease, though the competitor is more prevalent. I gave a lecture explaining this cognitive error to the residents, though we had to leave open whether our team had been right. Months later, the patient’s investigational procedure revealed she had an endometrioma, neither a cancer nor an abscess. In the language of heuristics and biases, this was a case of “premature closure,” not of the “representativeness heuristic.”
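The base-rate lesson in that anecdote can be given illustrative numbers. In the sketch below the priors and likelihoods are invented, not taken from the case; the point is only that even when the findings fit the rare disease much better, Bayes’ theorem can still favor the common one:

```python
def posterior(prior_rare, prior_common, lik_rare, lik_common):
    """Posterior probabilities of two competing diagnoses, given the
    likelihood of the observed presentation under each (Bayes' theorem)."""
    joint_rare = prior_rare * lik_rare
    joint_common = prior_common * lik_common
    total = joint_rare + joint_common
    return joint_rare / total, joint_common / total

# Invented numbers: the presentation "fits" the rare disease far better
# (likelihood 0.8 vs. 0.3), yet its low base rate (0.05 vs. 0.95) dominates.
p_rare, p_common = posterior(0.05, 0.95, 0.8, 0.3)
print(round(p_rare, 3), round(p_common, 3))  # 0.123 0.877
```

Under these assumed numbers the common disease remains about seven times more probable, which is the structure of the classic representativeness demonstrations.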
The metacognitive theory, just like medical diagnosis, involves fallible pattern recognition. After we know the patient’s diagnosis, we can accurately categorize the cognitive error. Prospectively, we may misidentify the heuristic and apply an irrelevant corrective strategy. The strategies appropriate for protecting oneself against the base rate neglect that the representativeness strategy can produce may not be appropriate for preventing premature closure.
If our “figure,” metacognitive awareness, is not easy enough or effective enough, should we instead focus on our “ground,” improving physicians’ mastery of the cognitive mechanics of diagnosis per se? Unfortunately, the same arguments may be made, as many have noted before: learning and applying the principles of diagnosis is not easy and teaching them has had small impact. As with metacognition, there are two distinct modes by which physicians may apply diagnostic principles as they think about their patients (see Table 1). They can apply the general principles to the particular patient, or they can recall and use specific facts and strategies appropriate for the particular patient’s presentation that they have learned before.
Education for the first mode would teach the formal principles of the optimal diagnostic process, training students to: identify relevant diagnoses and assess their prevalence; seek information that best discriminates among competing disease categories; adjust probabilities for new findings, using Bayes’ theorem (for one finding at a time) or prediction rules (combining multiple findings); and act in the light of the disease probabilities to maximize treatment utility, by choosing the most beneficial treatment or treating all diseases whose probability is over a utility-based threshold. Learning these principles may impose an even greater burden than learning general metacognition. Even physicians who understand the principles well enough to teach them do not often apply them to individual patients, for the numerical inputs for Bayes’ theorem are not easily accessible, prediction rules are hard to find, and it is hard to carry out the calculations in the clinic. A believable case has been made, however, that just knowing the general rules provides useful insight [30, 31].
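The “one finding at a time” Bayesian step can be sketched in the odds-likelihood-ratio form; the prevalence and test characteristics below are invented for illustration, not drawn from any cited study:

```python
def update_probability(prior, sensitivity, specificity, positive=True):
    """One-finding Bayesian update in odds form:
    posterior odds = prior odds * likelihood ratio."""
    prior_odds = prior / (1.0 - prior)
    if positive:
        lr = sensitivity / (1.0 - specificity)   # LR+ for a positive finding
    else:
        lr = (1.0 - sensitivity) / specificity   # LR- for a negative finding
    posterior_odds = prior_odds * lr
    return posterior_odds / (1.0 + posterior_odds)

# Invented numbers: 5% prior, finding with 90% sensitivity, 80% specificity.
print(round(update_probability(0.05, 0.90, 0.80), 3))  # 0.191
```

The appeal of the odds form is that sequential findings multiply their likelihood ratios, one at a time, exactly as the text describes.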
To improve diagnostic reasoning through the specific-level mode would require that physicians master particular diagnostic strategies for each clinical presentation, strategies built previously through the application of diagnostic principles to the data, but integrated into the physician’s illness scripts and not requiring calculation at the time of use. Acquired through study and supervised case experience, these diagnostic scripts would allow the physician to seek information and draw conclusions based on accurate knowledge of diagnosis-finding relations. To make the principles of diagnosis more accessible to physicians, a number of thinking aids have been proposed, including the use of natural frequencies rather than probabilities, graphic representations, re-expressions of Bayes’ theorem [34, 35], and their combination (Hamm and Beasley, under review).
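Natural frequencies re-express the same Bayesian calculation as counts in a reference population, the format Gigerenzer and Hoffrage argue people follow more easily. A minimal sketch with invented numbers (the same illustrative 5% prevalence, 90% sensitivity, 80% specificity used above for Bayes’ theorem):

```python
def natural_frequencies(population, prevalence, sensitivity, specificity):
    """Bayes' theorem as whole-population counts (natural-frequency format)."""
    diseased = population * prevalence
    healthy = population - diseased
    true_pos = diseased * sensitivity            # diseased who test positive
    false_pos = healthy * (1.0 - specificity)    # healthy who test positive
    ppv = true_pos / (true_pos + false_pos)      # P(disease | positive test)
    return true_pos, false_pos, ppv

tp, fp, ppv = natural_frequencies(1000, 0.05, 0.90, 0.80)
print(f"Of 1000 patients, {tp:.0f} true and {fp:.0f} false positives; PPV = {ppv:.3f}")
```

Reading the result as “45 true positives among 235 positives” makes the base rate’s effect visible in a way the probability format hides.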
I have argued that metacognition is more work than conventionally recognized, and less effective than initially hoped. On that basis, I urged those seeking to improve physicians’ diagnostic reasoning not to neglect improving physicians’ mastery of diagnosis per se. That too takes hard and sustained work, and may have only a moderate effect. Presumably focusing on both aspects of physician diagnosis, figure and ground, will produce more improvement than focusing on either alone. So I propose another revision of Croskerry’s sentence as an appropriate vision for reducing physician misdiagnosis:
“… knowing how the diagnostic and metacognitive processes work allows us to train students and inexperienced clinicians in their use, ultimately helping them minimize or avoid diagnostic errors.”
The author thanks Jasmina Stevanov for assistance in identifying the source of the figure/ground illustration. The work by David Barker is at the San Francisco Exploratorium, http://exs.exploratorium.edu/exhibits/angel-columns/. The photograph was originally published at http://www.flickr.com/photos/shashachu/443215138/ and is reproduced with kind permission from the photographer, Sha Sha Chu.
Zwaan L, de Bruijne M, Wagner C, Thijs A, Smits M, van der Wal G, et al. Patient record review of the incidence, consequences, and causes of diagnostic adverse events. Arch Intern Med 2010;170:1015–21.
Graber M, Gordon R, Franklin N. Reducing diagnostic errors in medicine: what’s the goal? Acad Med 2002;77:981–92.
Weed LL, Weed L. Medicine in denial. CreateSpace Independent Publishing Platform, 2011.
Kahneman D, Slovic P, Tversky A, editors. Judgment under uncertainty: heuristics and biases. New York: Cambridge University Press, 1982.
Gilovich T, Griffin DW, Kahneman D, editors. Heuristics and biases: the psychology of intuitive judgment. Cambridge, UK: Cambridge University Press, 2002.
Evans JS, Stanovich KE. Dual-process theories of higher cognition: advancing the debate. Perspect Psychol Sci 2013;8:223–41.
Bergus G, Vogelgesang S, Tansey J, Franklin E, Feld R. Appraising and applying evidence about a diagnostic test during a performance-based assessment. BMC Med Educ 2004;4:20.
Poses RM, Cebul RD, Centor RM. Evaluating physicians’ probabilistic judgments. Med Decis Making 1988;8:233–40.
Noguchi Y, Matsui K, Imura H, Kiyota M, Fukui T. A traditionally administered short course failed to improve medical students’ diagnostic performance. A quantitative evaluation of diagnostic thinking. J Gen Intern Med 2004;19:427–32.
Basinga P, Moreira J, Bisoffi Z, Bisig B, Van den Ende J. Why are clinicians reluctant to treat smear-negative tuberculosis? An inquiry about treatment thresholds in Rwanda. Med Decis Making 2007;27:53–60.
Hamm RM. Irrational persistence in belief. In: Kattan MW, editor. Encyclopedia of medical decision making. Thousand Oaks, CA: Sage Publications, 2009:640–4.
Hogarth RM. Judgement and choice: the psychology of decision. New York: John Wiley & Sons, 1980.
Croskerry P, Singhal G, Mamede S. Cognitive debiasing 2: impediments to and strategies for change. BMJ Qual Saf 2013. Epub 2013 Sep 3.
Arkes HR. Costs and benefits of judgment errors: implications for debiasing. Psychol Bull 1991;110:486–98.
Larrick RP. Debiasing. In: Koehler DJ, Harvey N, editors. Blackwell handbook of judgment and decision making. Malden, MA: Blackwell Publishers, 2004:316–37.
Abernathy CM, Hamm RM. Surgical intuition. Philadelphia, PA: Hanley and Belfus, 1995.
Woloschuk W, Harasym P, Mandin H, Jones A. Use of scheme-based problem solving: an evaluation of the implementation and utilization of schemes in a clinical presentation curriculum. Med Educ 2000;34:437–42.
Bursztajn H, Feinbloom RI, Hamm RM, Brodsky A. Medical choices, medical chances. New York: Delacorte, 1981.
Graber ML, Kissam S, Payne VL, Meyer AN, Sorensen A, Lenfestey N, et al. Cognitive interventions to reduce diagnostic error: a narrative review. BMJ Qual Saf 2012;21:535–57.
Brown RV. Decision theory as an aid to private choice. Judgm Decis Mak 2012;7:207–23.
Gigerenzer G, Hoffrage U. How to improve Bayesian reasoning without instruction: frequency formats. Psychol Rev 1995;102:684–704.
Sedlmeier P. Improving statistical reasoning: theoretical models and practical implications. Mahwah, NJ: Lawrence Erlbaum Associates, 1999.
Van Puymbroeck H, Remmen R, Denekens J, Scherpbier A, Bisoffi Z, Van Den Ende J. Teaching problem solving and decision making in undergraduate medical education: an instructional strategy. Med Teach 2003;25:547–50.
Van den Ende J, Bisoffi Z, Van Puymbroek H, Van der Stuyft P, Van Gompel A, Derese A, et al. Bridging the gap between clinical practice and diagnostic clinical epidemiology: pilot experiences with a didactic model based on a logarithmic scale. J Eval Clin Pract 2007;13:374–80.
Published Online: 2014-01-08
Published in Print: 2014-01-01
Conflict of interest statement The author declares no conflict of interest.
Citation Information: Diagnosis, Volume 1, Issue 1, Pages 29–33, ISSN (Online) 2194-802X, ISSN (Print) 2194-8011, DOI: https://doi.org/10.1515/dx-2013-0019.
©2014 by Walter de Gruyter Berlin/Boston. This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 3.0 License. BY-NC-ND 3.0