
Diagnosis

Official Journal of the Society to Improve Diagnosis in Medicine (SIDM)


Learning and teaching aren’t the same – the need for diagnosis curricula in graduate medical education

Michael A. Sundberg
Departments of Medicine and Pediatrics, University of Minnesota Medical School, 420 Delaware Street SE, MMC 741, Minneapolis, MN 55455, USA

Andrew P.J. Olson, corresponding author
Departments of Medicine and Pediatrics, University of Minnesota Medical School, 420 Delaware Street SE, MMC 741, Minneapolis, MN 55455, USA
Published Online: 2020-01-09 | DOI: https://doi.org/10.1515/dx-2019-0085

Close inspection of the historical development of graduate medical education calls into question our long-held belief that diagnostic reasoning is adequately developed through experience in the clinical training environment [1]. Over the past two decades, new attention to the rate of diagnostic error in our health systems [2], [3], and to measured diagnostic performance among physicians [4], [5], points to an urgent need for explicit education in diagnosis that goes beyond this traditional, and inadequate, approach [6]. Given the value of improved diagnosis for patient safety, quality of care, efficient learning health systems, and reduced health care costs, it may come as a surprise that distinct, longitudinal diagnosis curricula are not the norm.

In this issue of Diagnosis, Harris and colleagues present their efforts to implement and study a longitudinal diagnostic reasoning curriculum within an internal medicine residency [7]. Their approach provides an explicit, foundational curriculum (the bolus) that is followed by reinforcement of key concepts of diagnostic reasoning within the pre-existing educational structure of their residency program (the booster). Such work complements and builds upon prior efforts to introduce longitudinal curricula [8], [9], and thoughtfully uses the Diagnostic Thinking Inventory (DTI) to assess self-perceptions of diagnostic reasoning development among residents who complete the curriculum [10]. Such assessment allows the educators to formally evaluate the impact of their curricula in comparison to a control group – adding a component of research that remains too uncommon in curricula introduced during graduate medical education.
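
To make concrete what such a controlled evaluation can look like, consider the following minimal sketch in Python. All numbers here are simulated for illustration only; the group sizes, score distributions, and choice of a t-test on change scores are our assumptions, not the analysis of Harris and colleagues.

```python
# Hypothetical sketch: comparing DTI score changes between a curriculum
# (intervention) cohort and a control cohort. Data are simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 30  # residents per group (illustrative, not the study's n)

# Simulated total DTI scores before and after the intervention period.
control_pre = rng.normal(170, 12, size=n)
control_post = control_pre + rng.normal(2, 8, size=n)   # little change
curric_pre = rng.normal(170, 12, size=n)
curric_post = curric_pre + rng.normal(10, 8, size=n)    # larger gain

# Compare within-resident change scores across the two groups.
delta_control = control_post - control_pre
delta_curric = curric_post - curric_pre
t, p = stats.ttest_ind(delta_curric, delta_control)
print(f"mean DTI gain, curriculum group: {delta_curric.mean():+.1f}")
print(f"mean DTI gain, control group:    {delta_control.mean():+.1f}")
print(f"t = {t:.2f}, p = {p:.4g}")
```

The design point, not the particular test, is what matters: a control group lets a change in self-reported scores be attributed to the curriculum rather than to the passage of training time alone.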

While the development of the curriculum presented is well deserving of praise, we note that the greatest insights may lie in the challenges this study faced, challenges shared by the few existing studies of diagnosis curricula in the medical education literature. One such challenge involves the validity of the measures used to evaluate the effectiveness of programs to improve diagnostic performance. Said differently, are we measuring what we want to measure?

Accurate measurement of diagnostic reasoning is a challenging undertaking. It is difficult to assess a cognitive process that occurs rapidly, involves significant integration of data, and is inherently idiosyncratic. In their study, Harris and colleagues use, and expand upon, the validated DTI as a measure of residents' self-perceived growth in diagnostic reasoning. Similarly, prior work by Ruedinger and colleagues used resident surveys on awareness of bias and self-rated proficiency in diagnostic reasoning after deployment of a focused curriculum on diagnosis [8]. These measures help educators understand issues such as trainee receptivity to a curriculum and trainees' self-efficacy in improving the diagnostic process. Yet such surveys and self-assessments are inherently subject to a social desirability bias that is difficult to overcome; trainees want to appear to have learned something. Moreover, they do not measure actual changes in trainees' capability to navigate the diagnostic process accurately, appropriately, and efficiently. This is of fundamental importance, as studies show that diagnostic performance and confidence are poorly correlated, if correlated at all [11]. Very recent work by Schaye and colleagues, as well as Iyer and colleagues, begins to merge self-assessment with testing of knowledge gained or of the application of diagnostic reasoning taught in curricula [12], [13]. A movement to use systematic chart reviews, case vignettes, and standardized encounters intended to elicit or mimic pitfalls in the cognitive processes of diagnosis is also likely to be helpful in measuring the impact of new curricula.
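
To illustrate the kind of analysis behind the finding that confidence tracks accuracy poorly, in the spirit of the vignette study by Meyer and colleagues [11], here is a minimal, hypothetical Python sketch. The 0-10 confidence scale, the near-flat accuracy relationship, and all data are simulated assumptions, not the study's actual data or methods.

```python
# Hypothetical sketch: does confidence predict diagnostic accuracy on
# case vignettes? Data are simulated for illustration only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 200  # simulated physician-vignette encounters

# Confidence ratings (0-10) that carry almost no signal about whether
# the working diagnosis was actually correct.
confidence = rng.uniform(0, 10, size=n)
p_correct = 0.55 + 0.005 * confidence        # nearly flat relationship
correct = (rng.random(n) < p_correct).astype(int)

# Point-biserial correlation between accuracy (binary) and confidence.
r, p = stats.pointbiserialr(correct, confidence)
print(f"r = {r:+.2f}, p = {p:.3f}")  # near-zero r: confidence != accuracy
```

A near-zero correlation in data like these is exactly why self-assessment alone cannot stand in for measured diagnostic performance.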

Self-assessment as a measure, however, is not wholly futile. Of particular interest is that the significant improvement in DTI scores during the early stages of residency was plausibly related to the curriculum introduced by Harris and colleagues. That the controls in the study did not show such changes supports the conclusion that the curriculum is an effective means of changing early residents' self-perceptions of proficiency in the diagnostic process, a strength not to be overlooked in diagnosis education. Specific curricula aimed at improving diagnosis may do so in part simply by drawing attention to the problem of diagnostic error and the need for such education.

Why might the curriculum presented by Harris and colleagues be more effective for earlier trainees? The construct of diagnostic reasoning as a threshold skill provides a helpful frame for understanding this finding [14]. While one is never "done" learning diagnostic reasoning, an emerging idea in the literature holds that a certain initial threshold must be crossed before a learner can begin to reason effectively. This is in keeping with the best current understanding of the structure of medical knowledge [15]. The finding that results were most notable early in training also raises the possibility that other processes (e.g. the existing clinical experience of interns) influence, or even confound, the curriculum's effect on DTI scores. Though not well described in the literature, our experience as learners and teachers suggests that the steepest improvement in functional diagnostic reasoning performance occurs during internship. This is likely due to increasing graduated responsibility, high expectations from clinical supervisors, and the sheer number of opportunities to make diagnoses during this intense phase of training. These and other factors are hard to separate from the effect of a specific curricular intervention.

Despite the current challenges in measuring the effects of diagnosis curricula, the need for such education remains great; current work toward implementing diagnosis curricula in residency programs is therefore highly laudable. The future of diagnosis education must involve the development of consistent and reliable mechanisms for assessing trainees' developing competence in diagnostic reasoning. However, we must not let the perfect be the enemy of the good. We should continue to expand such curricula to more residency programs and ensure their inclusion in professional training programs outside of internal medicine and pediatrics. The need to improve diagnostic performance through education is pressing, and it is exciting to see important steps being taken.

References

1. Fischer MA, Mazor KM, Baril J, Alper E, DeMarco D, Pugnaire M. Learning from mistakes: factors that influence how students and residents learn from medical errors. J Gen Intern Med 2006;21:419–23.

2. Kohn L, Corrigan J, Donaldson MS, editors. To err is human: building a safer health system. A Report of the Committee on Quality of Health Care in America, Institute of Medicine. Washington, DC: National Academy Press, 2000.

3. Landrigan CP, Parry GJ, Bones CB, Hackbarth AD, Goldmann DA, Sharek PJ. Temporal trends in rates of patient harm resulting from medical care. N Engl J Med 2010;363:2124–34.

4. van den Berge K, Mamede S. Cognitive diagnostic error in internal medicine. Eur J Intern Med 2013;24:525–9.

5. Berner ES, Graber ML. Overconfidence as a cause of diagnostic error in medicine. Am J Med 2008;121:S2–23.

6. Olson AP, Singhal G, Dhaliwal G. Diagnosis education – an emerging field. Diagnosis 2019;6:75–7.

7. Harris KI, Rowat JS, Suneja M. Embedding a longitudinal diagnostic reasoning curriculum in a residency program using a bolus/booster approach. Diagnosis 2019;7:21–5.

8. Ruedinger E, Olson M, Yee J, Borman-Shoap E, Olson AP. Education for the next frontier in patient safety: a longitudinal resident curriculum on diagnostic error. Am J Med Qual 2017;32:625–31.

9. Reilly JB, Myers JS, Salvador D, Trowbridge RL. Use of a novel, modified fishbone diagram to analyze diagnostic errors. Diagnosis 2014;1:167–71.

10. Bordage G, Grant J, Marsden P. Quantitative assessment of diagnostic ability. Med Educ 1990;24:413–25.

11. Meyer AN, Payne VL, Meeks DW, Rao R, Singh H. Physicians' diagnostic accuracy, confidence, and resource requests: a vignette study. JAMA Intern Med 2013;173:1952–8.

12. Schaye V, Eliasz KL, Janjigian M, Stern DT. Theory-guided teaching: implementation of a clinical reasoning curriculum in residents. Med Teach 2019;41:1192–9.

13. Iyer S, Goss E, Browder C, Paccione G, Arnsten J. Development and evaluation of a clinical reasoning curriculum as part of an Internal Medicine Residency Program. Diagnosis 2019;6:115–9.

14. Pinnock R, Anakin M, Jouart M. Clinical reasoning as a threshold skill. Med Teach 2019;41:683–9.

15. Schmidt HG, Rikers RM. How expertise develops in medicine: knowledge encapsulation and illness script formation. Med Educ 2007;41:1133–9.

About the article

Published Online: 2020-01-09

Published in Print: 2020-01-28


Author contributions: All the authors have accepted responsibility for the entire content of this submitted manuscript and approved submission.

Research funding: None declared.

Employment or leadership: None declared.

Honorarium: None declared.


Citation Information: Diagnosis, Volume 7, Issue 1, Pages 1–2, ISSN (Online) 2194-802X, ISSN (Print) 2194-8011, DOI: https://doi.org/10.1515/dx-2019-0085.


©2020 Walter de Gruyter GmbH, Berlin/Boston.
