Growing awareness of persistently high levels of diagnostic error has led to increasing calls for new approaches to diagnostic training [1, 2]. Often, the proposed approaches focus on increasing clinicians’ awareness of the flaws (biases) associated with the heuristics (rules of thumb) believed to be a major contributor to diagnostic errors. Curiously, one factor rarely brought forward in these discussions is a description of how the ill-defined or polymorphic nature of human diseases leads to the formulation of the very heuristics believed to adversely affect diagnostic performance. Simply put, given that the vast majority of human diseases lack necessary and sufficient diagnostic criteria at the bedside [i.e., defining sets of clinical signs and symptoms (S/S)], clinicians are essentially forced to formulate the diagnostic heuristics likely to induce errors. This article has two purposes: to describe, in some detail, why the ill-defined nature of human diseases forces clinicians to formulate and use the heuristic shortcuts and biased strategies that may lead to diagnostic errors; and to introduce learning sciences principles that could inform the construction of new, evidence-based diagnostic training approaches for future health care providers.
Coming to appreciate the ill-defined nature of human diseases
During the formative years of medical training, instructors, reading assignments, and textbooks give students the general impression that diseases are readily diagnosable because patients present with ‘characteristic’ sets of S/S. Students come to understand that Disease A has one particular set of characteristic S/S, Disease B yet another set, and so on. The persistent portrayal of diseases as having characteristic S/S during pre-clinical training and testing activities reinforces students’ belief that memorization of these characteristic S/S is the foundation of diagnostic competence.
However, during encounters with real patients on rotations, students begin to appreciate that a patient with Disease A, B, C, etc. rarely presents with all of that disease’s characteristic S/S. Given this growing realization, students likely begin searching for ‘subsets’ of these characteristic S/S in the hope that they will serve as necessary and sufficient heuristics with which to match (rule-in) and distinguish (rule-out) the various diseases possibly causing the patient’s presentation.
As a concrete example, imagine a student considering differentials such as myocardial infarction, dissecting thoracic aortic aneurysm, pulmonary embolus and pneumothorax in a patient presentation involving acute chest pain. Clearly, each of these differentials has its own somewhat uniquely characteristic S/S (e.g., dull, sub-sternal chest pain for myocardial infarction; sharp, stabbing, piercing pain radiating to the back for dissecting aortic aneurysm, etc.). However, these same differentials also share a number of characteristic S/S, such as sudden onset of pain and shortness of breath. How is a medical student to arrive at a diagnosis when a patient’s presentation consists of both S/S that are somewhat uniquely associated with each of several of these differentials and other S/S which are shared across several of these same differentials? How is their mentor to support that student in understanding how and why this patient’s presenting S/S best match one particular differential and/or are sufficient to infer one differential as a more likely diagnosis than the competing differentials? While answers to these questions remain important and yet elusive, by the time students become young clinicians, they have been gradually and perhaps unconsciously indoctrinated to believe that the formulation of largely idiosyncratic subsets of S/S serves as a necessary and sufficient heuristic by which accurate diagnoses can be made at the bedside.
As a more general example, consider the situation wherein Disease A, B, C and D are each said to have eight characteristic S/S (see Figure 1; left hand columns represent Diseases A, B, C and D, respectively). Note that S/S 1, 7, 10 and 13 might be considered somewhat uniquely characteristic for Diseases A, D, C and B, respectively, but not in themselves, necessary and sufficient to make the diagnosis for their associated disease. Also note that three of the characteristic S/S of Disease A are shared with Disease B (i.e., S/S 3, 15 and 20) while three yet different S/S characteristic of A are shared with Disease C (S/S 4, 9 and 12). Furthermore, of the four S/S that are shared between Disease A and D (S/S 3, 8, 12 and 20), two of those (S/S 3 and 20) are also shared with B while one of those (S/S 12) is also shared with C. How should faculty mentors explain to their students how to arrive at a most likely/probable bedside diagnosis of A, B, C or D given any of the five example cases (numbers 1 through 5) listed to the right in Figure 1? While the mentor can and should reinforce the notion of suspending a diagnosis until all relevant S/S (for problems and diseases under consideration) have been collected, the greater truth is that simply reinforcing the need for completeness in data gathering will not enable the students to readily and accurately diagnose the cases at hand.
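The overlap structure just described can be made concrete with a small sketch. Disease A’s eight S/S and the shared memberships follow from the text above; the remaining, unnamed S/S of Diseases B, C and D are hypothetical placeholders chosen only to bring each disease to eight S/S. The sketch scores a case against each disease by simple S/S overlap, illustrating why overlap alone cannot cleanly rule one disease in and the others out.

```python
# Each disease is modeled as a set of its characteristic S/S (numbered as in Figure 1).
# Disease A's membership follows from the text; the unnamed members of B, C and D
# are hypothetical placeholders added only to give each disease eight S/S.
diseases = {
    "A": {1, 3, 4, 8, 9, 12, 15, 20},    # S/S 1 somewhat unique to A
    "B": {3, 13, 15, 20, 2, 5, 16, 18},  # S/S 13 somewhat unique; 3, 15, 20 shared with A
    "C": {4, 9, 10, 12, 6, 11, 17, 19},  # S/S 10 somewhat unique; 4, 9, 12 shared with A
    "D": {3, 7, 8, 12, 20, 14, 21, 22},  # S/S 7 somewhat unique; 3, 8, 12, 20 shared with A
}

def overlap_scores(case_ss):
    """Count how many of the case's S/S fall within each disease's characteristic set."""
    return {name: len(case_ss & members) for name, members in diseases.items()}

# A hypothetical case presenting only S/S shared among several differentials:
print(overlap_scores({3, 12, 20}))  # → {'A': 3, 'B': 2, 'C': 1, 'D': 3}
# A and D tie outright; overlap alone cannot discriminate between them.
```

A case built entirely from shared S/S produces tied or near-tied scores, which is precisely the ambiguity the student in the example above faces at the bedside.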
Somehow, health care providers become fairly proficient at dealing both with ill-defined diseases and the further complexity brought on when many of the S/S associated with these ill-defined diseases are shared, to varying degrees, with one or more of the diseases comprising the competing differentials for the problem at hand. While we should celebrate the health care provider’s capacity to succeed in the face of these ambiguities, we must also remember that the diagnostic performance of contemporary practitioners leaves much to be desired. How should we train the next generation of clinicians so that they might achieve the level of diagnostic accuracy that patients and society already expect, but do not yet receive, from contemporary clinicians?
Steps towards the development of a learning sciences based approach to diagnostic training
The learning sciences are dedicated to achieving three broad goals: 1) defining the performance characteristics of competent individuals, 2) elucidating the knowledge base structures and intellectual skills enabling the transformation of novices into competent individuals, and 3) producing instructional design guidelines likely to optimize development of the knowledge base structures and intellectual skills leading to competence.
What are the performance characteristics of competent diagnosticians?
Traditional medical curricula have long operated on the assumption that generalizable intellectual skills (e.g., critical reasoning, problem solving, higher order thinking) are the foundation of clinical competence: skills which, once developed, enable the attainment of high levels of diagnostic accuracy across a variety of patient problems and their associated disease differentials. Contrary to this widely held belief are studies which have repeatedly demonstrated that the diagnostic performance of medical students and physicians varies: 1) across problems (e.g., a physician may diagnose case presentations of chest pain more accurately/competently than cases presenting with hematemesis or hematuria), 2) across diseases within a problem (e.g., for the problem of chest pain, diagnosing pericarditis more accurately/competently than pneumonia), and 3) as a function of the level of typicality associated with a given case. Typicality is the degree to which the S/S in a case of a disease reflect the characteristic or prototypical S/S associated with that disease; a prototypical case portrayal of pulmonary embolus is more likely to be correctly diagnosed than an atypical portrayal [6–9]. Collectively, these three findings have led to the conclusion that diagnostic competence is largely predicated upon the strength of the individual’s knowledge relevant to the specific problem, diseases, and level of typicality reflected in the case at hand, rather than exclusively dependent upon the development of generalizable intellectual skills [10–13].
What are the knowledge base structures and intellectual skills underlying the development of diagnostic competence?
In studies involving the development of competence and expertise, Ericsson observed that a training environment consisting of ‘multiple focused practice opportunities and deliberate feedback’ reliably expedited the transformation of novices into competent performers [14, 15]. Artificial neural networks (ANN) have been used to explain how a learning environment consisting of an iterative cycle of stimuli and corrective feedback leads to adjustments in the neural units’ weighted responses to future instances and, thereby, to gradual performance improvements. This research provides a compelling metaphor for how human neural networks achieve performance improvements: multiple practice opportunities and feedback lead to gradual adjustments in neural weightings which, over time, increase the likelihood of a correct response in subsequent exposures to similar stimuli.
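The stimulus-and-corrective-feedback cycle described above can be sketched with a minimal perceptron-style learner. The dataset, learning rate and feature encoding below are illustrative assumptions, not from the source: each practice trial produces a response, the feedback is the signed error, and the weights shift slightly toward producing a correct response on similar future stimuli.

```python
# Minimal sketch of iterative practice plus corrective feedback adjusting weights.
# Stimuli are binary feature vectors (presence/absence of findings); the label says
# whether the target disease is present. All data here are illustrative.
def train(stimuli, labels, epochs=20, rate=0.1):
    weights = [0.0] * len(stimuli[0])
    bias = 0.0
    for _ in range(epochs):                      # repeated practice opportunities
        for x, label in zip(stimuli, labels):
            activation = sum(w * xi for w, xi in zip(weights, x)) + bias
            response = 1 if activation > 0 else 0
            error = label - response             # corrective feedback: signed error
            weights = [w + rate * error * xi for w, xi in zip(weights, x)]
            bias += rate * error                 # small adjustment toward correctness
    return weights, bias

def predict(weights, bias, x):
    return 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0

# Toy training set: the disease is present exactly when feature 0 is present.
stimuli = [(1, 0), (1, 1), (0, 1), (0, 0)]
labels = [1, 1, 0, 0]
w, b = train(stimuli, labels)
```

After a few passes the weights settle so that every training stimulus is classified correctly, which is the ANN sense in which repeated exposure plus feedback yields gradual performance improvement.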
More recently, learning sciences researchers have suggested that dual processing (System 1 and System 2) theories can provide a deeper understanding of the types of knowledge base structures and intellectual skills (information processing mechanisms) resulting from multiple, deliberate practice opportunities and focused feedback. With regard to knowledge base structures, these theories suggest that the development of diagnostic competence occurs via the accumulation and storage of multiple case instances of a given disease and the ongoing reformulation of an increasingly robust representation of the most salient S/S useful for ruling in/out that disease. The internalized representations of prior case instances and the weighting rules associated with the most salient S/S for a given disease comprise, respectively, the core knowledge bases used by the non-analytical (pattern recognition) and analytical (rule-based) processes utilized during differential diagnosis.
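These two knowledge structures can be sketched side by side. The stored instances, feature names and weights below are hypothetical placeholders: a non-analytical route that matches a new case against stored prior instances, and an analytical route that scores the case with per-feature rule-in/rule-out weights for one candidate disease.

```python
# Hypothetical sketch of the two knowledge bases described above.
# Non-analytical route: match the new case against stored prior case instances.
# Analytical route: apply learned per-feature weights (positive = rule-in,
# negative = rule-out) for a candidate disease. All data here are illustrative.
stored_instances = [
    ({"dull pain", "dyspnea", "diaphoresis"}, "MI"),
    ({"tearing pain", "back radiation"}, "dissection"),
    ({"pleuritic pain", "dyspnea", "tachycardia"}, "PE"),
]

feature_weights_mi = {"dull pain": 2.0, "diaphoresis": 1.5,
                      "dyspnea": 0.5, "tearing pain": -2.0}

def pattern_match(case):
    """Non-analytical: return the diagnosis of the most similar stored instance."""
    return max(stored_instances, key=lambda inst: len(case & inst[0]))[1]

def rule_score(case):
    """Analytical: sum the rule-in/rule-out weights of the features present."""
    return sum(feature_weights_mi.get(f, 0.0) for f in case)
```

On this toy data, a case of dull pain with dyspnea is matched to the stored MI instance by the non-analytical route and receives a positive MI score from the analytical route; the theory holds that practice grows the instance store while feedback refines the weights.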
Learning sciences principles that can inform the construction of future diagnostic training programs
Current approaches to diagnostic training do not enable medical students to reliably diagnose case vignettes similar to ones previously solved [18, 19]. In medical education, this problem is referred to as the ‘content’ or ‘case specificity’ phenomenon, while in the learning sciences it is referred to as the ‘transfer problem’. Fortunately, there is a body of literature suggesting a training strategy that supports transfer: focus on a single problem and its multiple causes, and use multiple practice opportunities followed by immediate performance assessment and corrective feedback. In diagnostic training, such a strategy would enable the construction of the knowledge base structures (i.e., stored cases and disease/feature weighting rules) theorized as underlying transfer, and subsequently lead to improvements in diagnostic performance [21–23]. In medical education, researchers have demonstrated that both human- and computer-mediated instructional activities built around a single problem and its associated diseases, multiple practice opportunities, and immediate feedback support diagnostic transfer even in the face of ill-defined diagnostic criteria.
The author has utilized this evolving understanding of the ill-defined nature of diseases, the evolution of diagnostic competence, and dual processing related models of mind to formulate three learning sciences principles that can inform the construction of future diagnostic training programs. First, given that diagnostic competence is knowledge-based and problem- and disease-specific, construct problem-focused diagnostic instructional modules that provide learners with the knowledge most relevant to the specific problem (e.g., acute chest pain) and the diseases associated with that problem. Second, in the context of problem-focused instructional modules, provide students with multiple practice opportunities against cases representing the problem’s core disease differentials, and arrange those cases along a typicality gradient (from easy to hard). Practice against multiple case portrayals of decreasing typicality (increasing atypicality) would serve as a basis for incremental adjustments in disease/feature weighting rules, enabling the gradual extension of transfer capabilities from easy (typical) cases to increasingly difficult (less typical) case portrayals. Third, provide disease- and case-specific feedback sufficient to expedite the construction of a knowledge base consisting of increasing numbers of case instances and continually refined disease/feature weighting rules.
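The second principle’s typicality gradient can be sketched as follows. The prototype and case S/S below are hypothetical placeholders: typicality is measured as the fraction of a disease’s prototypical S/S present in a case, and the practice cases are then ordered from most to least typical.

```python
# Illustrative sketch: order practice cases along a typicality gradient.
# Typicality = fraction of the disease's prototypical S/S present in the case.
# The prototype and cases below are hypothetical placeholders.
prototype = {"dull pain", "dyspnea", "diaphoresis", "nausea"}

cases = {
    "case1": {"dull pain", "dyspnea", "diaphoresis", "nausea"},  # fully typical
    "case2": {"dull pain", "dyspnea", "fatigue"},                # partly typical
    "case3": {"nausea", "fatigue", "dizziness"},                 # atypical
}

def typicality(case_ss):
    """Share of the prototypical S/S that this case portrays."""
    return len(case_ss & prototype) / len(prototype)

# Practice order: easy (typical) cases first, harder (atypical) cases later.
practice_order = sorted(cases, key=lambda c: typicality(cases[c]), reverse=True)
print(practice_order)  # → ['case1', 'case2', 'case3']
```

Any monotone measure of similarity to the prototype would do; the point is only that the gradient gives instructors a principled sequencing of practice cases.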
The author suggests that the ill-defined nature of human diseases is a root cause of persistently high levels of diagnostic error in that it essentially forces practitioners to formulate largely idiosyncratic heuristics with their own inherent flaws and biases. Further, there is little evidence to suggest that contemporary approaches to training and assessing diagnostic capabilities are predicated upon a codified, learning sciences-derived approach to diagnostic reasoning. Consequently, the persistence of higher than acceptable levels of diagnostic error would appear to be the outcome of a systematically flawed approach to training and assessing diagnostic capabilities, caused by medicine’s adherence to traditional, non-evidence-based approaches to education. The author suggests that learning sciences findings elucidating the factors contributing to the development of diagnostic competence, and models of how humans reason in the face of uncertainty and complexity (i.e., dual processing theories), be utilized as a set of guiding principles in formulating the first steps towards a codified, 21st century approach to training and assessing the diagnostic capabilities of future health care providers.
The author acknowledges the contributions of Robert Hamm, PhD during the final editing of this manuscript.
Graber ML, Berner ES. Diagnostic error: is overconfidence the problem? Am J Med 2008;121:S2–23.
Sawyer RK. Introduction: the new science of learning. In Sawyer RK, editor. The Cambridge handbook of the learning sciences. New York: Cambridge University Press, 2006:1–16.
Papa FJ, Harasym PH. Medical curriculum reform in North America, 1765 to the present: a cognitive science perspective. Acad Med 1999;74:154–64.
Eva KW. On the generality of specificity. Med Educ 2003;37:587–8.
Papa FJ, Elieson W. Diagnostic accuracy as a function of case prototypicality. Acad Med 1993;69:S58–60.
Papa FJ, Stone RC, Aldrich DG. Further evidence of the relationship between case typicality and diagnostic performance: implications for medical education. Acad Med 1996;71:S10–12.
Bloom BS, editor. Taxonomy of educational objectives: the classification of educational goals. Handbook I: cognitive domain. New York: David McKay Company, 1956.
Anderson LW, Krathwohl DR, Airasian PW, Cruikshank KA, Mayer RE, Pintrich PR, Raths J, Wittrock MC, editors. A taxonomy for learning, teaching, and assessing: a revision of Bloom’s taxonomy of educational objectives. Boston: Allyn & Bacon, 2001.
Glaser R. Education and thinking: the role of knowledge. Am Psychol 1984;39:93–104.
Barrows HS, Tamblyn RM. Problem-based learning: an approach to medical education. New York: Springer Publishing, 1980.
Ericsson KA, Krampe RT, Tesch-Romer C. The role of deliberate practice in the acquisition of expert performance. Psychol Rev 1993;100:363–406.
Ericsson KA. The search for general abilities and basic capacities: theoretical implications from the modifiability and complexity of mechanisms mediating expert performance. In Sternberg RJ, Grigorenko EL, editors. Perspectives on the psychology of abilities, competencies, and expertise. New York: Cambridge University Press, 2006:93–125.
Churchland PS. Neurophilosophy: towards a unified science of mind/brain. Cambridge: MIT Press, 1992.
Evans JStBT. Dual-processing accounts of reasoning, judgment and social cognition. Annu Rev Psychol 2007;59:255–78.
Elstein AS, Shulman LS, Sprafka SA. Medical problem solving: an analysis of clinical reasoning. Cambridge: Harvard University Press, 1978.
Kimball DR, Holyoak KJ. Transfer and expertise. In Tulving E, Craik FIM, editors. The Oxford handbook of memory. New York: Oxford University Press, 2000:109–22.
Ericsson KA. Deliberate practice and the acquisition and maintenance of expert performance in medicine and related domains. Acad Med 2004;79(10 Suppl):S70–81.
Bransford JD, Brown AL, Cocking RR, editors. How people learn: brain, mind, experience, and school. Committee on Developments in the Science of Learning, Commission on Behavioral and Social Sciences and Education, National Research Council. Washington, DC: National Academy Press, 2000.
Kirschner PA, Sweller J, Clark RE. Why minimal guidance during instruction does not work: an analysis of the failure of constructivist, discovery, problem-based, experiential, and inquiry-based teaching. Educ Psychol 2006;41:75–86.
Hatala RM, Norman GR, Brooks LR. Practice makes perfect: the critical role of mixed practice in the acquisition of ECG interpretation skills. Adv Health Sci Educ Theory Pract 2003;8:17–26.
Papa FJ, Oglesby MW, Aldrich DG, Schaller F, Cipher DJ. Improving diagnostic capabilities of medical students via application of cognitive sciences-derived learning principles. Med Educ 2007;41:419–25.
About the article
Published Online: 2014-01-08
Published in Print: 2014-01-01
Citation Information: Diagnosis, Volume 1, Issue 1, Pages 125–129, ISSN (Online) 2194-802X, ISSN (Print) 2194-8011, DOI: https://doi.org/10.1515/dx-2013-0013.
©2014 by Walter de Gruyter Berlin/Boston. This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 3.0 License (CC BY-NC-ND 3.0).