Diagnosis

Official Journal of the Society to Improve Diagnosis in Medicine (SIDM)

The effect of cognitive debiasing training among family medicine residents

Brent W. Smith (corresponding author), Eglin Air Force Base – Family Medicine, 307 Boatner Road, Eglin Air Force Base, FL 32542, USA / Michael B. Slack

Published Online: 2015-04-17 | DOI: https://doi.org/10.1515/dx-2015-0007

Abstract

Background: Debiasing education has been recommended for physicians in training. We report on the efficacy of a workshop designed to help family medicine residents recognize and respond to their risk of misdiagnosis due to cognitive biases during patient care.

Methods: Residents participated in a debiasing workshop in which they were taught to recognize and respond to cognitive biases likely to contribute to misdiagnosis. Metacognition was introduced, and cognitive forcing strategies were demonstrated and practiced. While precepting clinic visits, attendings evaluated residents in the following areas: 1) diagnostic concordance between resident and attending, 2) the resident's ability to perceive their risk of cognitive bias, 3) the quality of the resident's plan to mitigate this risk, and 4) the presence of an unrecognized cognitive bias. Pre- and post-workshop data were compared.

Results: Preceptor concurrence with the residents’ diagnoses was unchanged – 74% (63 of 85) vs. 78% (45 of 58, p=0.64). Residents’ ability to recognize their risk of cognitive bias was unchanged – 51% (43 of 85) vs. 57% (33 of 58, p=0.46). Residents’ formulation of an acceptable plan to mitigate the effect of cognitive bias increased from 84% (36 of 43) to 100% (33 of 33, p=0.02). Preceptors’ perception of an unrecognized cognitive bias in the residents’ presentation was unchanged – 12% (10 of 85) vs. 9% (5 of 58, p=0.55).

Conclusions: A debiasing workshop for family medicine residents demonstrated improvement in one of four studied outcomes.

Keywords: cognitive error; debiasing; education; resident; training

Introduction

Misdiagnosis is increasingly recognized as a significant cause of patient harm [1, 2]. Cognitive error due to bias is believed to contribute to a large proportion of misdiagnoses [3]. Experts in the study of misdiagnosis have called for educational interventions aimed at raising the awareness of cognitive biases and decreasing their potentially harmful effect on diagnostic accuracy [4, 5]. Some medical schools and residencies have implemented curricula to address cognitive bias and misdiagnosis. Few studies have been published documenting the efficacy of these attempts at education and training [6].

The educational approach to cognitive debiasing we present here included a didactic introduction to cognitive biases, analysis of possible biases in case presentations involving misdiagnosis, an introduction to meta-cognition, and practice of corrective responses (cognitive forcing strategies). This approach is in line with published recommendations for debiasing training [7]. It is also similar to the approach of others who have attempted comparable educational interventions among physicians in training [8, 9]. Our approach differs from those previously published in four important ways: 1) assessment during actual patient care; 2) assessment of learners' perception of their situational risk of cognitive bias; 3) assessment of learners' efforts to reduce the negative impact of potential cognitive bias; 4) assessment by expert clinicians via direct observation.

Materials and methods

Study population/participants

The study location was the Family Medicine Residency Program at David Grant Medical Center, Travis Air Force Base, California, USA. Nineteen family medicine residents participated in the educational intervention. Eight were PGY3, six were PGY2, and three were PGY1. Eight were female and 11 were male.

IRB approval and informed consent

The Institutional Review Board of David Grant Medical Center approved this study. The Board concluded the study represented an educational intervention which did not require documentation of signed individual informed consent. While participation in the workshop was required, participation in the study was optional. No residents declined participation.

Intervention

A reading exercise was assigned to all participants two weeks prior to the workshop. The five-page reading assignment contained the following: research statistics and quotes from experts highlighting the importance and prevalence of misdiagnosis among medical errors; an introduction to the concepts and vocabulary related to cognitive error; an introductory case of misdiagnosis due to cognitive error; and reflective exercises encouraging residents to begin thinking about their thinking.

One week after completing the reading assignment, available residents participated in the interactive lecture portion of the workshop, which spanned two 45-minute sessions. During the interactive lectures, residents were taught about five cognitive biases believed to contribute to misdiagnosis: availability, framing, blind obedience, anchoring and premature closure. The concept and practice of monitoring and evaluating the quality of their thought processes was introduced, explained and demonstrated. Two actual cases of misdiagnosis were studied and possible cognitive contributors to the misdiagnoses were explored. The physician involved in one of the cases was present and able to share insights into the thinking that may have contributed to the misdiagnosis.

Residents then practiced identifying cognitive errors in two different cases and proposed specific cognitive forcing strategies to minimize the negative effect of bias and avoid misdiagnosis. Special emphasis was placed on identification of clinical situations that pose an increased risk of bias, such as seeing the same diagnosis repeatedly or evaluating a patient recently diagnosed by another physician. The preventive strategy highlighted was for clinicians to become alert to situations where bias is highly likely and then respond appropriately. Participants were not expected to recognize when their thinking was biased, as bias is often unconscious.

Assessment tools

Prior to resident assessment, each preceptor participated in a one-hour training session in which they were introduced to the concept of cognitive bias and how it may contribute to misdiagnosis. The difference between error due to bias and error due to a lack of resident knowledge or skill was explained. The data collection forms were reviewed.

Then, while precepting continuity clinic visits involving a new diagnosis, preceptors prompted residents with the following three questions: 1) What is your initial diagnosis? 2) What about this case might be negatively affecting your judgment and possibly leading you to choose an incorrect diagnosis? 3) What can you do to decrease your risk of choosing an incorrect diagnosis in this case? The answers were recorded and the case was then precepted in the usual fashion.

Once done precepting the case with the resident, the attending completed an evaluation form with the four study questions: 1) Did you agree with the resident’s diagnosis? Y/N 2) Did the resident identify a risk for cognitive error? Y/N 3) If yes, do you feel the proposed strategy offered by the resident will be effective in lowering their risk of misdiagnosis? Y/N 4) Do you feel there is a cognitive error which poses a risk for misdiagnosis not addressed by the resident? Y/N
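
To make the assessment workflow concrete, the sketch below shows one way the four Y/N responses from the preceptor form could be recorded and tallied, including the rule that question 3 is only answered when a risk was identified in question 2. The record and helper names are illustrative assumptions, not the authors' actual data collection tool.

```python
# Illustrative sketch only (hypothetical names): a minimal record mirroring the
# four Y/N study questions on the preceptor evaluation form.
from dataclasses import dataclass
from typing import Optional

@dataclass
class PreceptedEncounter:
    agreed_with_diagnosis: bool            # Q1: preceptor agreed with the resident's diagnosis
    identified_bias_risk: bool             # Q2: resident identified a risk for cognitive error
    plan_judged_effective: Optional[bool]  # Q3: answered only when Q2 is "yes"
    unrecognized_bias_present: bool        # Q4: preceptor saw an unaddressed cognitive error

def tally(encounters: list) -> dict:
    """Count 'yes' answers; Q3 is tallied only over encounters where Q2 was 'yes'."""
    q2_yes = [e for e in encounters if e.identified_bias_risk]
    return {
        "n": len(encounters),
        "q1_agree": sum(e.agreed_with_diagnosis for e in encounters),
        "q2_identified_risk": len(q2_yes),
        "q3_acceptable_plan": sum(bool(e.plan_judged_effective) for e in q2_yes),
        "q4_unrecognized_bias": sum(e.unrecognized_bias_present for e in encounters),
    }
```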

The data collection forms were created specifically for the assessment task and were validated during a pilot study and by expert review (Figure 1). Assessments were done over a 2-week period prior to the workshop and for 20 weeks following the workshop.

Figure 1: Debiasing workshop evaluation form.

Residents were assessed based on their presence in clinic during the evaluation periods, so some residents may have been assessed multiple times and others not at all. Only residents who participated in the workshop were assessed during the post-workshop evaluation period, resulting in a smaller pool of residents due to schedule and rotation conflicts.

Analysis

Responses to the four research questions were dichotomous (yes/no). The χ² test was used to test the null hypothesis of no difference between the pre- and post-workshop responses.
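
As a concrete illustration of this comparison, the sketch below applies an uncorrected χ² test to 2×2 pre/post contingency tables built from the counts reported in the Results. The use of SciPy and of an uncorrected (rather than continuity-corrected) test are assumptions made for illustration; the paper does not name the software used. The computed p-values are consistent with those reported.

```python
# Minimal sketch, not the authors' analysis code: an uncorrected chi-squared test
# on each pre- vs. post-workshop 2x2 table of yes/no counts from the Results.
from scipy.stats import chi2_contingency

# rows = [pre-workshop, post-workshop]; columns = [yes, no]
tables = {
    "Q1 preceptor agreed with diagnosis":          [[63, 22], [45, 13]],  # 63/85 vs. 45/58
    "Q2 resident identified bias risk":            [[43, 42], [33, 25]],  # 43/85 vs. 33/58
    "Q3 acceptable mitigation plan (if Q2 = yes)": [[36, 7], [33, 0]],    # 36/43 vs. 33/33
    "Q4 unrecognized bias noted by preceptor":     [[10, 75], [5, 53]],   # 10/85 vs. 5/58
}

for label, table in tables.items():
    chi2, p, dof, _ = chi2_contingency(table, correction=False)
    print(f"{label}: chi2 = {chi2:.2f}, p = {p:.3f}")
# Output is close to the reported p-values of 0.64, 0.46, 0.02 and 0.55.
```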

Results

Eighty-eight precepted encounters were evaluated from a potential pool of 30 residents over the 2-week period prior to the workshop, and 58 precepted encounters were evaluated from a potential pool of 19 residents over the 20-week period following the workshop. Responses to question 3 (the resident's formulation of an acceptable plan to mitigate the effect of cognitive error due to bias) were analyzed only for cases where the resident had first identified a risk for cognitive error (question 2) for that particular encounter.

Preceptor concurrence with the residents' diagnoses was unchanged – 74% (63 of 85) vs. 78% (45 of 58, p=0.64). Residents' ability to recognize their risk of cognitive bias was unchanged – 51% (43 of 85) vs. 57% (33 of 58, p=0.46). Residents' formulation of an acceptable plan to mitigate the effect of this cognitive bias increased from 84% (36 of 43) to 100% (33 of 33, p=0.02). Preceptors' perception of an unrecognized cognitive bias in the residents' presentation was unchanged – 12% (10 of 85) vs. 9% (5 of 58, p=0.55). The improvement in the acceptability of the residents' proposed risk-reduction plans was statistically significant; none of the other results reached statistical significance (Table 1).

Table 1: Pre- and post-workshop assessment of resident performance during precepted clinical encounters.

  Measure                                                         Pre-workshop   Post-workshop   p-value
  Preceptor agreed with resident's diagnosis                      74% (63/85)    78% (45/58)     0.64
  Resident identified a risk of cognitive bias                    51% (43/85)    57% (33/58)     0.46
  Acceptable plan to mitigate bias (when a risk was identified)   84% (36/43)    100% (33/33)    0.02
  Preceptor perceived an unrecognized cognitive bias              12% (10/85)    9% (5/58)       0.55

Discussion

The main objective of our workshop was to increase residents’ ability to recognize their risk for misdiagnosis due to cognitive bias during patient care and propose a risk reducing response. Though positive trends were demonstrated, our workshop failed to show statistically significant improvements in most of the workshop objectives.

Two previous studies on the effectiveness of cognitive debiasing training among physicians in training are similar enough to serve as comparisons to our study.

Reilly et al. developed a year-long curriculum aimed at increasing internal medicine residents' awareness and understanding of cognitive biases. Their curriculum included multiple sessions, small group activities and an online component that taught about cognitive error due to bias and strategies to avoid misdiagnosis. Their report showed some gains in resident knowledge of cognitive biases based on scores on pre- and post-curriculum multiple-choice tests. Following their training, residents also demonstrated an ability to recognize cognitive biases and suggest mitigating strategies during videos of simulated patient care [8]. There was no assessment of resident behavior during actual patient care.

Sherbino et al. reported a prospective study of senior medical students who received training in cognitive biases and strategies for lowering their negative impact on diagnosis. The educational intervention consisted of a 90 min interactive lecture where the biases of search satisficing and availability were taught. Case studies were reviewed and cognitive forcing strategies practiced. Students were later presented with six test cases – some cases with more than one active diagnosis to test for search satisficing bias, other cases involving an uncommon diagnosis to test for availability bias. The authors concluded their results failed to show any improvement in diagnostic accuracy or diminished susceptibility to cognitive bias between the intervention group and an untrained control group [9].

Our study is most comparable in both structure and outcomes to Sherbino’s. This would suggest that educating physicians in training to recognize and avoid their risk of misdiagnosis due to cognitive bias is ineffective. While this may be the case, the lack of significant improvement in our studied outcomes could reasonably be attributed to ineffective instruction and/or measurement strategies rather than the inability of any such training to benefit participants.

There is reason to believe our training approach was simply ineffective. A reading assignment and 2 hours of instruction are likely insufficient to significantly change our learners' understanding and recognition of their risk for error due to cognitive biases – biases which are deeply wired and, by their nature, difficult to perceive. Possibilities for improvement in our teaching methods include more training episodes and more reinforcement after the initial training. The addition of video simulation, as in Reilly's curriculum, or, even better, recordings of actual patient care highlighting the potential development of bias could be especially helpful [8]. We purposely asked our evaluating preceptors not to lead or teach their residents about bias during the evaluation periods, as this would represent further training and thus confound our effectiveness testing. Yet this type of continued reinforcement during patient care might be of great benefit.

An inconsistent assessment process may have significantly reduced our ability to detect and measure the effectiveness of our workshop. Though all our preceptors received training in cognitive error and bias, it was brief, provided no model of how to perform the assessment and offered no opportunity to practice. Not surprisingly, it was evident while reviewing their resident evaluation forms that significant confusion remained. There was a large disparity in our preceptors' understanding and application of the concepts being taught, as well as in their interpretations of the residents' patient presentations. It was sometimes unclear to our preceptors what constituted a 'cognitive error' or a bias vs. a lack of medical knowledge, for example. Attendings, themselves unfamiliar with their evaluation task, probably overestimated the residents' ability to perceive and describe their risk for cognitive error. This could explain the unexpectedly high pre-workshop performance of our learners, which in turn contributed to our low effect size. Whether a risk-reducing response seemed appropriate was left to the subjective judgment of the preceptors, as no quality criteria or anchors were provided to them.

In the future, more time and greater effort could be placed on evaluator training, perhaps using video or live demonstrations of what clinical situations that raise the risk of cognitive bias look like from the perspective of a precepting attending. As training a large number of faculty is logistically challenging, another option might be to use a smaller number of well-trained preceptors to reduce evaluation variability. This would decrease the number of assessments that could be done in a given time, but would likely raise their quality. Finally, video of the interaction between preceptor and resident could be recorded, reviewed and evaluated by an additional, well-trained evaluator instead of relying solely on a preceptor's real-time evaluation.

Despite the lack of significant effect on our studied outcomes, we feel our workshop highlights a number of valuable teaching approaches to the topic of bias and misdiagnosis. Identifying 'situational awareness' as a measurable sub-step in avoiding misdiagnosis due to bias seems valuable. Learner assessment during actual patient care, rather than through a proxy such as a written exam or patient simulation, seems very important. The use of diagnostic concordance between attending and resident as a surrogate for diagnostic accuracy is compelling, since diagnostic accuracy is what we are ultimately after, yet it is hard to measure. This surrogate requires more exploration but has face validity among front-line clinician-educators. Formally validating this surrogate measure in the future could aid in developing and measuring novel attempts to decrease diagnostic error in medical trainees. Though we did not break down our data analysis to this level, it would be interesting to see how the effectiveness of cognitive debiasing differs (or does not differ) by training year. Building on the strengths and addressing the weaknesses of our study represents a promising opportunity for future research in the crucial area of preventing misdiagnosis.

References

1. Weingart NS, Wilson R, Gibberd RW, Harrison B. Epidemiology of medical error. West J Med 2000;172:390–3.
2. Graber ML. The incidence of diagnostic error in medicine. Br Med J Qual Saf 2013;22:ii21–7.
3. Graber ML, Franklin N, Gordon R. Diagnostic error in internal medicine. Arch Intern Med 2005;165:1493–9.
4. Trowbridge RL. Twelve tips for teaching avoidance of diagnostic errors. Med Teach 2008;30:496–500.
5. Trowbridge RL, Dhaliwal G, Cosby KS. Educational agenda for diagnostic error reduction. Br Med J Qual Saf 2013;22:ii28–32.
6. Graber ML, Kissam S, Payne VL, Meyer A, Sorensen A, Lenfestey N, et al. Cognitive interventions to reduce diagnostic error: a narrative review. Br Med J Qual Saf 2012;21:535–57.
7. Croskerry P, Singhal G, Mamede S. Cognitive debiasing 1: origins of bias and theory of debiasing. Br Med J Qual Saf 2013;22:ii58–64.
8. Reilly JB, Ogdie AR, Von Feldt JM. Teaching about how doctors think: a longitudinal curriculum in cognitive bias and diagnostic error for residents. Br Med J Qual Saf 2013;22:1044–50.
9. Sherbino J, Kulasegaram K, Howey E. Ineffectiveness of cognitive forcing strategies to reduce biases in diagnostic reasoning: a controlled trial. Can J Emerg Med 2014;16:34–40.

About the article

Corresponding author: Brent W. Smith, Eglin Air Force Base – Family Medicine, 307 Boatner Road, Eglin Air Force Base, FL 32542, USA


Received: 2015-02-20

Accepted: 2015-03-17

Published Online: 2015-04-17

Published in Print: 2015-06-01


Author contributions: All the authors have accepted responsibility for the entire content of this submitted manuscript and approved submission. The opinions and assertions contained herein are the private views of the authors and are not to be construed as official or as reflecting the views of the Medical Department of the US Air Force or the US Air Force at large.

Research funding: None declared.

Employment or leadership: None declared.

Honorarium: None declared.

Competing interests: The funding organization(s) played no role in the study design; in the collection, analysis, and interpretation of data; in the writing of the report; or in the decision to submit the report for publication.

The entire curriculum evaluated in this report is available for free with attribution at http://www.improvediagnosis.org/?ClinicalEducation under Cognitive Errors Curricula, Cognitive Debiasing Workshop for Medical Residents. Please contact the author with any opportunity to collaborate on future studies.


Citation Information: Diagnosis, Volume 2, Issue 2, Pages 117–121, ISSN (Online) 2194-802X, ISSN (Print) 2194-8011, DOI: https://doi.org/10.1515/dx-2015-0007.

©2015, Brent W. Smith et al., published by De Gruyter. This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 3.0 License. BY-NC-ND 3.0
