
Diagnosis

Official Journal of the Society to Improve Diagnosis in Medicine (SIDM)


A simulation-based approach to training in heuristic clinical decision-making

Ghazwan Altabbaa (corresponding author: Clinical Associate Professor, Department of Medicine, Cumming School of Medicine, University of Calgary, Rockyview General Hospital, 7007 14th St. S.W., Calgary, Alberta T2V 1P9, Canada) / Amanda D. Raven / Jason Laberge
Published Online: 2019-04-16 | DOI: https://doi.org/10.1515/dx-2018-0084

Abstract

Background

Cognitive biases may negatively impact clinical decision-making. The dynamic nature of a simulation environment can facilitate heuristic decision-making, which can serve as a teaching opportunity.

Methods

Momentum bias, confirmation bias, playing-the-odds bias, and order-effect bias were integrated into four simulation scenarios. Clinical simulation educators and human factors specialists designed a script of events during scenarios to trigger heuristic decision-making. Debriefing included the exploration of frames (mental models) resulting in the observed actions, as well as a discussion of specific bias-prone frames and bias-resistant frames. Simulation sessions and debriefings were coded to measure the occurrence of bias, recovery from biased decision-making, and effectiveness of debriefings.

Results

Twenty medical residents and 18 medical students participated in the study. Eighteen pairs (of one medical student and one resident) and two individuals (medical residents alone) completed the 20 simulation sessions. Evidence of bias was observed in 11 of 20 (55%) sessions. While most participant pairs were able to avoid or recover from the anticipated bias, there were three sessions with no recovery. Evaluation of debriefings showed exploration of frames in all the participant pairs. Establishing new bias-resistant frames occurred more often when the learners experienced the bias.

Conclusions

Instructional design using experiential learning can focus learner attention on the specific elements of diagnostic decision-making. Using scenario design and debriefing enabled trainees to experience and analyze their own cognitive biases.

Keywords: clinical decision-making; cognitive bias; heuristics; misdiagnosis; simulation

Introduction

Cognitive biases are frequently cited as a source of diagnostic error and a challenge to clinical decision-making. Studies have noted the pervasiveness of diagnostic errors in clinical practice [1], [2], [3], [4], [5]. Limitations related to attention, perception, memory, and knowledge make all humans susceptible to error [6]. The potential for error increases as complexity and uncertainty increase, familiarity decreases, and time constraints are added [7]. While the overall rate of diagnostic error in medicine is estimated to be around 15% [8], [9], errors are more prevalent where uncertainty and ambiguity are high, such as in emergency medicine, family medicine, and internal medicine (IM) [10], [11], [12], [13], [14], [15], [16], [17], [18]. Although heuristics offer advantages in terms of efficiency and resourcefulness [19], [20], they may fail in complicated healthcare environments. Cognitive bias can be defined as the failure of a heuristic strategy, although others define bias as any systematic cognitive tendency, regardless of whether the resulting judgment is correct [21], [22], [23].

Simulation has been suggested as a debiasing strategy [24], [25]. However, the best instructional design for simulation approaches that foster better heuristic decision-making is unknown, and the effectiveness of any such approach likely depends on effective debriefing. The primary objective of this study was to evaluate the effectiveness of a simulation curriculum developed to (a) create situations where learners are susceptible to cognitive bias, (b) provide targeted feedback during debriefing to help learners recognize the sources of bias, and (c) help learners reframe their approaches to diagnostic decision-making.

Methods

This study recorded participants’ behaviors in a simulated clinical environment in order to observe events that may also occur in real practice. The project was reviewed using “A pRoject Ethics Community Consensus Initiative” (ARECCI) screening tools and a conversation with an ARECCI reviewer to confirm our procedures for maintaining the confidentiality of subjects [26]. All study subjects provided written consent before participation, and data collection procedures were completed in compliance with the simulation center’s policies to ensure subject confidentiality.

Study setting

This simulation curriculum was designed and deployed at the Rockyview General Hospital Internal Medicine Simulation Program (www.medsimcalgary.com) in Calgary, Alberta, in collaboration with the Human Factors department at Alberta Health Services. Data collection was completed over a 5-month period. Each of the four scenarios was repeated monthly for different groups of learners.

Every simulation session started with a prebriefing, signed confidentiality statements, signed consent for video recording when applicable, and orientation to the simulation environment. Simulation activities were facilitated by an internist simulation educator and a confederate registered nurse. Debriefing occurred in the simulation laboratory immediately after the event, and the reflection was facilitated by the Promoting Excellence and Reflective Learning in Simulation (PEARLS) debriefing script [27].

The debriefing process adhered to procedures followed in our simulation program, which is accredited by the Royal College of Physicians and Surgeons of Canada. The debriefing content and strategies were developed and standardized during the 3-month pilot phase prior to data collection.

Cognitive biases

We selected four cognitive biases for this study based on our estimates of prevalence, fit within existing clinical scenarios, expected resonance with learners, and reproducibility across simulation sessions. Based on the literature, we defined behavioral indicators that participant learners would exhibit if they were experiencing each bias [1], [2], [28], [29] (See Table 1).

Table 1: Cognitive biases selected for inclusion in the study and behavioral indicators.

We designed each scenario to create situations where participants could experience a specific cognitive bias within four different presentations in IM. We developed a means of increasing the risk of bias for each scenario (e.g. handover of a diagnosis from another clinician). All scenarios featured a clinical problem with a broad differential diagnosis (e.g. shortness of breath). The correct diagnosis and the most likely incorrect diagnosis were established by the authors, who also defined the correct and incorrect actions for each scenario (Table 2). The incorrect actions aligned with the behavioral indicators that learners would exhibit if they were experiencing the relevant cognitive bias. These incorrect actions were the basis for video coding.

Table 2: Scenario, presentation, diagnosis, actions, and how learners may experience bias.

All four scenarios were validated during a 3-month pilot period with three different groups of learners. This pilot phase was conducted prior to any data collection to test each scenario for alignment between the learning objectives of the clinical case, flow of the simulation event, and debriefing strategies. The instructional design followed a model of diagnostic decision-making [30], [31] which highlights external uncertainties in clinical decision-making (Figure 1); the latter was used to design susceptibility to bias into the simulation. Debriefings focused on exploring learners’ internal uncertainty and the specific frames driving the actions at the time when learners experienced bias-prone simulation events. A frame is a mental model that helps people make sense of external reality [32].

Figure 1: Diagnostic decision-making model (adapted from Zuk 2008, Croskerry 2009).

Participants

Different groups of learners at the Rockyview simulation program rotate monthly and typically include medical students, post-graduate year (PGY) 1 IM residents, and PGY 1 subspecialty residents rounding on the IM teaching unit. All learners attended a total of 12 simulation sessions as part of the monthly IM teaching curriculum (eight sessions were standard curriculum and not part of this study). They were not given the objectives of the study and received no other teaching about cognitive biases during their rotation. Before the study simulation sessions, learners had participated in at least one non-study simulation session to allow familiarization with the simulation modality and environment.

Protocol

Each scenario was run during weekly simulation sessions for five consecutive monthly blocks using a randomized scenario order. Thus, all four simulation scenarios were experienced by five consecutive monthly cohorts. Different participant learner pairs acted as the diagnostic team in each scenario. Two scenarios (one confirmation bias scenario and one order-effect bias scenario) were run a second time (with different learners) due to problems with the initial video recordings.
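For concreteness, the block-randomized schedule described above can be sketched as follows. This is an illustration only: the paper does not report how the order was generated, and the function name and per-block seeding are our own assumptions.

```python
import random

SCENARIOS = ["momentum bias", "confirmation bias",
             "playing-the-odds bias", "order-effect bias"]

def scenario_order_for_block(block, seed=0):
    """Return a randomized order of the four scenarios for one monthly block."""
    rng = random.Random(seed + block)  # per-block seed keeps the schedule reproducible
    order = SCENARIOS[:]
    rng.shuffle(order)
    return order

# Five consecutive monthly blocks, each cohort experiencing all four scenarios.
for block in range(1, 6):
    print(f"Block {block}: {scenario_order_for_block(block)}")
```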

Each session started with a 15-min prebriefing period to establish a safe learning environment [33]. Typically, between 10 and 15 learners attended each simulation session. After the prebriefing, two learners volunteered to participate in the simulated scenario as the diagnosticians; the remaining learners went to the observation room. The two participant learners and the simulation educator were taken to a simulation room designed to mimic an emergency department room, where the educator provided an overview and orientation to the simulation environment.

This orientation consisted of a review of the features and limitations of the high-fidelity mannequin (SimMan® 3G, Laerdal Medical Canada, Ltd., Toronto, Canada), available equipment (e.g. vital signs monitor, oxygen delivery source and devices, personal protective equipment, and hand hygiene), phones, and electronic or printed resources (e.g. online access, hospital policies, and books). In addition, participants were oriented to clinical data sources such as a confederate nurse, a confederate family member, hospital records (emergency department chart and discharge summaries), and a computer screen where the results of investigations [e.g. electrocardiogram (EKG), X-ray, and lab results] would be displayed. The educator clarified aspects where realism was distorted, such as time (e.g. all investigation results available after a short delay; consultants or help made available after a reasonable amount of assessment and independent decision-making), medications (no real medications were used), and personnel (scripted confederates, such as a nurse acting in her role and, for some scenarios, family members).

Participant learners then left the simulation room and, a few seconds later, were brought back in by the confederate emergency department nurse, who provided the initial patient presentation. The scenario proceeded in a semi-scripted fashion directed by the simulation educator, depending on how the participant learners managed the case as outlined in Table 2. Responses to learners’ actions (e.g. physiological parameters of the mannequin, results of investigations, and reactions by confederates) were scripted but adjusted as necessary during the scenario by the simulation educator to maintain the physiological and psychological fidelity of the scenario. Each scenario took an average of 20 min to complete and was stopped by the simulation educator once the overall assessment (and sometimes initial treatment) was completed and the patient was ready for transition to the next phase of care (e.g. admission to hospital). After the simulation concluded, the simulation educator and all confederates met for a debriefing session in the observation room (Figure 2).

Figure 2: Protocol of the simulation scenarios.

Debriefing

Debriefing was performed after each scenario by one simulation educator who is certified by the Society for Simulation in Healthcare and trained in the Community of Inquiry model [34], [35]. The model explores the rationale and frames behind observed actions, including the process of diagnostic decision-making. The primary goal of this facilitated reflection was to teach the clinical learning objectives of the simulation scenario. A secondary goal was to identify the source of each of the four selected cognitive biases and their consequences. Specifically, a three-step change model was applied: first bringing awareness to the current way of thinking (i.e. unfreezing the bias-prone mental model), then promoting a different way of thinking (i.e. the change step), and finally stabilizing and solidifying the new norm of thinking (i.e. refreezing to a new bias-resistant mental model) [36], [37], [38]. Table 3 shows examples of both types of frames.

Table 3: Examples of bias-prone and bias-resistant frames with countermeasures for each bias.

Outcomes and measurements

Video and audio recordings of the simulation scenarios and debrief sessions were captured. A spreadsheet was developed to code the following variables from the simulation and debrief sessions (a minimal schema sketch follows the list):

Simulation scenario (experiencing bias) –

  1. Evidence of bias (Yes/No) – based on observations of incorrect actions for each scenario.

  2. If bias is noted, did the learner recover (Yes/No) – based on observations of the correct actions following any evidence of the bias.

  3. Used countermeasures/strategy (Yes/No) – based on whether the learner used any predefined countermeasures to recover from or avoid the bias (See Table 3).

Debrief sessions (changing the way learners think) –

  1. Explored frames related to bias (Yes/No) – based on whether the learner described any basis for their action/lack of action related to the bias.

  2. Evidence of change in frames (Yes/No) – based on whether the learner mentioned in their final “take home message” comments that they have changed their frame or reinforced an existing frame that minimizes bias.
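As an illustration of this coding scheme, the five Yes/No variables map naturally onto one record per session. The sketch below is our reconstruction, not the study's actual spreadsheet; the class and field names are hypothetical.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SessionCoding:
    """One coded simulation session plus its debrief (Yes/No items)."""
    scenario: str                  # e.g. "momentum bias"
    evidence_of_bias: bool         # incorrect actions for the scenario observed?
    recovered: Optional[bool]      # coded only when bias was observed, else None
    used_countermeasure: bool      # predefined countermeasure used (see Table 3)?
    explored_frames: bool          # debrief explored the basis for actions?
    changed_frames: bool           # take-home message shows a new or reinforced frame?
```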

Two reviewers from the Human Factors team independently coded the videos of the simulation scenarios and debriefing sessions using the coding spreadsheet. The inter-rater reliability was 80% for the 100 questions (five questions for each of the 20 sessions) that were double-coded from the videos. The two reviewers’ spreadsheets were merged and, where discrepancies existed, the reviewers discussed them and reached consensus on the coding. Though all responses from the raters were reviewed, the consensus discussion focused primarily on the 20 questions where there was disagreement.
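The reported 80% figure is simple percent agreement over the 100 double-coded questions. A minimal sketch of that computation, assuming the codes are stored as two parallel Yes/No lists (the helper names are ours):

```python
def percent_agreement(coder_a, coder_b):
    """Fraction of items on which two coders gave the same Yes/No answer."""
    assert len(coder_a) == len(coder_b), "coders must rate the same items"
    return sum(a == b for a, b in zip(coder_a, coder_b)) / len(coder_a)

def disagreement_indices(coder_a, coder_b):
    """Items to bring to the consensus discussion."""
    return [i for i, (a, b) in enumerate(zip(coder_a, coder_b)) if a != b]

# With 100 double-coded questions and 80 matches, percent_agreement
# returns 0.80, leaving 20 items for the consensus discussion.
```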

In addition, the reviewers scored the facilitator on their conduct of the debrief session, noting whether the facilitator exhibited behaviors (such as safety of learning environment, clarifications, and addressing learning questions) that helped close knowledge or performance gaps or provided a new frame.

Statistical analysis

Analyses were based on the coding results using the nominal (Yes/No) scale described earlier. Descriptive statistics illustrated how learner pairs responded differently to each of the four cognitive biases. An exploratory chi-squared (χ²) test helped assess whether changes in thinking varied as a function of experiencing bias. Due to the exploratory and applied nature of the study, a liberal significance level of α=0.10 was used [39], [40].

Results

Baseline characteristics

A total of 38 participant learners (18 scenarios × 2 participants per scenario, 2 scenarios × 1 participant per scenario) representing 20 different learner pairs participated in this study. All learner pairs had one resident and one medical student, with the exception of two scenarios completed by a single resident (two different residents) because a medical student was unavailable. Learners had the following demographics: male = 17 (44.7%), female = 21 (55.3%), medical student = 18 (47.4%), and junior resident = 20 (52.6%).

Outcomes

Table 4 summarizes the simulation curriculum evaluation results. Overall, 11 of 20 (55%) learner pairs showed some evidence of bias during the simulated scenarios: four of five pairs showed evidence of momentum bias, three of five pairs showed playing-the-odds bias, and two of five pairs each showed confirmation bias and order-effect bias. When there was evidence of bias, most pairs were able to recover (8/11, 73%), and almost all pairs (18/20, 90%) used a defined countermeasure (i.e. an action taken to counteract a cognitive bias; see examples in Table 3). There were three scenarios (two for momentum bias, one for playing the odds) in which learners did not recover from the cognitive bias. In all three, the learners initiated treatment for the incorrect diagnosis, which represents a missed (or at least delayed) diagnosis and a potential medical error.

Table 4: Count of the number of learner pairs that responded to each cognitive bias during simulation scenarios and debriefing sessions.

Analysis of debriefings showed that 19 of 20 (95%) sessions were able to explore learners’ frames regarding bias. All (20/20) debriefing conversations were also judged to fit naturally within the collected observations and issues brought up by the learners. Just over half of the learner pairs (11/20, 55%) showed evidence of a change in frames when stating their take-home message. The remaining take-home messages (45%) reflected other aspects of the clinical scenarios, such as medical concepts or teamwork. The momentum (4/5, 80%) and playing-the-odds (5/5, 100%) biases accounted for most of the changes in frames. Notably, only three of five pairs experienced the playing-the-odds bias, yet all five changed frames, which may suggest that learning can happen whether or not a pair has experienced the bias.

How learners respond when experiencing bias

Figure 3 shows how the results vary as a function of whether the learners showed evidence of bias. Countermeasures were useful both in avoiding bias (eight of nine pairs) and in recovering from it (eight of eight). Two learner pairs showed evidence of bias and did not recover despite using countermeasures. Both involved the momentum bias scenario, in which the learner pairs worked through a comprehensive history and a partial differential diagnosis but still ended up initiating treatment for the incorrect diagnosis of congestive heart failure. One learner pair was influenced by pressure from the family member confederate, while the other was unable to find enough evidence to steer them away from a cardiac etiology for the shortness of breath.

Figure 3: Distribution of learner pair responses as a function of whether there was evidence of bias.

There was no meaningful difference between learner pairs that showed evidence of bias and those that did not in terms of successfully exploring frames (11 of 11 vs. eight of nine) or debriefing on the effects of cognitive bias (11 of 11 vs. nine of nine).

An exploratory χ² test assessed the relationship between evidence of a change in frame and evidence of bias (Table 5) and suggested that more learner pairs changed frames when they also showed evidence of bias, χ²(1)=3.1, p=0.08. Put another way, if learners show evidence of experiencing bias, they are more likely to change their thinking.

Table 5: Changes in frames as a function of showing evidence of bias.
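The cell counts behind Table 5 are not reproduced on this page, but the reported marginals (11 of 20 pairs showed bias; 11 of 20 changed frames) together with the reported χ²(1)=3.1, p=0.08 are consistent with the 2×2 counts below. Treat the counts as a reconstruction, and note that we assume no continuity correction was applied:

```python
from scipy.stats import chi2_contingency

# Rows: evidence of bias (yes, no); columns: changed frames (yes, no).
# Counts reconstructed to match the reported marginals and test statistic.
table = [[8, 3],
         [3, 6]]

# correction=False -> plain Pearson chi-squared (no Yates correction),
# which reproduces the reported value; this is an assumption on our part.
chi2, p, dof, _expected = chi2_contingency(table, correction=False)
print(f"chi2({dof}) = {chi2:.1f}, p = {p:.2f}")  # chi2(1) = 3.1, p = 0.08
```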

Discussion

The study evaluated a simulation curriculum that enables learners to experience and learn about four cognitive biases. Simulations can present a realistic picture of uncertainties and variations in healthcare practice [24]. Our curriculum was designed to induce internal and external uncertainties during clinical presentations and contexts where intuition and heuristics could dominate. The goal was to create a situation where learners were likely to experience cognitive biases under pressure but in a safe learning environment.

Curriculum evaluation demonstrated success in designing scenarios using the simulation teaching modality. Debriefings were able to explore learners’ frames of mind regarding four biases and provide targeted feedback around mitigating cognitive bias. Facilitated reflection allowed the educator and learner to unfreeze existing frames and then refreeze a new frame. A key finding is that the effects of cognitive bias can be successfully explored via debriefing, regardless of whether bias is directly experienced by learners.

The evaluation indicated that learners experiencing the bias were more likely to show a change in frames at the end of the simulation debrief. This reinforces the notion that it may be easier to educate on cognitive bias when the teaching modality incorporates an experiential component. However, the result should be interpreted cautiously in light of the small number of observations [39], [40].

Two of the scenarios (momentum bias and playing the odds) were more effective in terms of learners experiencing the bias and changing frames. These two biases may be more influenced by external factors and therefore easier to create susceptibility to through scenario design. For instance, with momentum bias, an initial diagnosis is established and passed from person to person without evidence. Playing the odds tends to be based on a perceived odds judgment, which we created by passing the likelihood on to the learner [i.e. “this is our fourth pulmonary embolism (PE) this week”]. Inducing bias in the other two scenarios was harder. Confirmation bias tends to be more internal – the learner needs to become fixated on a diagnosis, which may be difficult to create in a short simulation scenario. The risk for the order-effect bias was created through a handover. However, the learners may have had heightened awareness and been more attentive because the handover was provided by a confederate; thus, it was difficult to create a situation where correct information is given but not attended to or recalled.

There are several limitations to this study. First, evidence of learning was evaluated by examining whether learners had changed their frames based on their take-home message in the debriefing; we did not evaluate retention of the change in frames in a follow-up simulation or other assessment. Second, there was no distinction between individual and team decision-making or between monitoring of individual and group bias [41], [42]. Third, the analysis did not take into account the use of diagnostic decision aids, which were made available and may have a role in inducing or mitigating cognitive bias.

Future directions for the curriculum may include learners from other specialties, or a design that integrates different teaching modalities such as lectures or small-group practice sessions [43], [44].

Training to manage the risk of cognitive biases in clinical decision-making is challenging. The study shows that cognitive biases can be integrated effectively into a simulation curriculum and that by using an experiential simulation, learners may be exposed to the sources and factors contributing to failures in heuristic decisions. Reinforcing more bias-resistant frames and encouraging the use of specific bias recovery and avoidance countermeasures could help future diagnosticians improve their diagnostic decision-making.

References

1. Gandhi TK, Kachalia A, Thomas EJ, Puopolo AL, Yoon C, Brennan TA, et al. Missed and delayed diagnoses in the ambulatory setting: a study of closed malpractice claims. Ann Intern Med 2006;145:488–96.

2. Graber ML, Wachter RM, Cassel CK. Bringing diagnosis into the quality and safety equations. J Am Med Assoc 2012;308:1211–2.

3. Newman-Toker DE, Pronovost PJ. Diagnostic errors – the next frontier for patient safety. J Am Med Assoc 2009;301:1060–2.

4. Singh H. Editorial: helping health care organizations to define diagnostic errors as missed opportunities in diagnosis. Jt Comm J Qual Patient Saf 2014;40:99–101.

5. National Academies of Sciences, Engineering, and Medicine. Improving diagnosis in health care [Internet]. 2015 [cited 2017 Sep 14]. Available at: https://www.nap.edu/catalog/21794/improving-diagnosis-in-health-care

6. Wickens CD, Lee J, Liu YD, Gordon-Becker S. Introduction to human factors engineering, 2nd ed. Upper Saddle River, NJ: Pearson, 2003:608.

7. Kirwan B. The validation of three human reliability quantification techniques – THERP, HEART and JHEDI: part 1 – technique descriptions and validation issues. Appl Ergon 1996;27:359–73.

8. Berner ES, Graber ML. Overconfidence as a cause of diagnostic error in medicine. Am J Med 2008;121(5 Suppl):S2–23.

9. Elstein AS. Thinking about diagnostic thinking: a 30-year perspective. Adv Health Sci Educ Theory Pract 2009;14(Suppl. 1):7–18.

10. Brennan TA, Leape LL, Laird NM, Hebert L, Localio AR, Lawthers AG, et al. Incidence of adverse events and negligence in hospitalized patients. Results of the Harvard Medical Practice Study I. N Engl J Med 1991;324:370–6.

11. Thomas EJ, Studdert DM, Burstin HR, Orav EJ, Zeena T, Williams EJ, et al. Incidence and types of adverse events and negligent care in Utah and Colorado. Med Care 2000;38:261–71.

12. Wilson RM, Runciman WB, Gibberd RW, Harrison BT, Newby L, Hamilton JD. The quality in Australian health care study. Med J Aust 1995;163:458–71.

13. Peabody JW, Luck J, Jain S, Bertenthal D, Glassman P. Assessing the accuracy of administrative data in health information systems. Med Care 2004;42:1066–72.

14. Weingart SN, Wilson RM, Gibberd RW, Harrison B. Epidemiology of medical error. Br Med J 2000;320:774–7.

15. Graber ML, Franklin N, Gordon R. Diagnostic error in internal medicine. Arch Intern Med 2005;165:1493–9.

16. Winters B, Custer J, Galvagno SM, Colantuoni E, Kapoor SG, Lee H, et al. Diagnostic errors in the intensive care unit: a systematic review of autopsy studies. BMJ Qual Saf 2012;21:894–902.

17. Ely JW, Kaldjian LC, D’Alessandro DM. Diagnostic errors in primary care: lessons learned. J Am Board Fam Med 2012;25:87–97.

18. Meyer AD, Payne VL, Meeks DW, Rao R, Singh H. Physicians’ diagnostic accuracy, confidence, and resource requests: a vignette study. JAMA Intern Med 2013;173:1952–8.

19. Klein GA, Orasanu J, Calderwood R. Decision making in action: models and methods. Norwood, NJ: Praeger, 1993:448.

20. Yates JF. Judgment and decision making. Englewood Cliffs, NJ: Prentice Hall, 1990:400.

21. Tversky A, Kahneman D. Belief in the law of small numbers. Psychol Bull 1971;76:105–10.

22. Kahneman D, Slovic P, Tversky A, editors. Judgment under uncertainty: heuristics and biases. Cambridge: Cambridge University Press, 1982:544.

23. Croskerry P. Achieving quality in clinical decision making: cognitive strategies and detection of bias. Acad Emerg Med 2002;9:1184–204.

24. Croskerry P. The importance of cognitive errors in diagnosis and strategies to minimize them. Acad Med 2003;78:775–80.

25. Schmidt E, Goldhaber-Fiebert SN, Ho LA, McDonald KM. Simulation exercises as a patient safety strategy: a systematic review. Ann Intern Med 2013;158(5 Pt 2):426–32.

26. Alberta Innovates. ARECCI Ethics Screening Tool. 2010. Available at: aihealthsolutions.ca/arecci/screening/

27. Eppich W, Cheng A. Promoting Excellence and Reflective Learning in Simulation (PEARLS): development and rationale for a blended approach to health care simulation debriefing. Simul Healthc 2015;10:106–15.

28. Cosby KS, Croskerry P. Patient safety: a curriculum for teaching patient safety in emergency medicine. Acad Emerg Med 2003;10:69–78.

29. Richardson WS. We should overcome the barriers to evidence-based clinical diagnosis! J Clin Epidemiol 2007;60:217–27.

30. Croskerry P. Clinical cognition and diagnostic error: applications of a dual process model of reasoning. Adv Health Sci Educ Theory Pract 2009;14(Suppl. 1):27–35.

31. InnoVis – Publications [Internet]. [cited 2017 Sep 17]. Available at: http://innovis.cpsc.ucalgary.ca/Publications/Zuk:2008:VU

32. Gentner D, Stevens A. Mental models, 1st ed. Hillsdale, NJ: Lawrence Erlbaum Associates, 1983:356.

33. Rudolph JW, Raemer DB, Simon R. Establishing a safe container for learning in simulation: the role of the presimulation briefing. Simul Healthc 2014;9:339–49.

34. Rudolph JW, Simon R, Dufresne RL, Raemer DB. There’s no such thing as “nonjudgmental” debriefing: a theory and method for debriefing with good judgment. Simul Healthc 2006;1:49–55.

35. Garrison DR, Anderson T, Archer W. Critical inquiry in a text-based environment: computer conferencing in higher education. Internet High Educ 1999;2:87–105.

36. Lewin K. Resolving social conflicts: selected papers on group dynamics, 1st ed. New York: Harper & Row, 1948.

37. Schein EH. Kurt Lewin’s change theory in the field and in the classroom: notes toward a model of managed learning. Syst Pract 1996;9:27–47.

38. Kurt Lewin: groups, experiential learning and action research [Internet]. infed.org. 2013 [cited 2017 Sep 21]. Available at: http://infed.org/mobi/kurt-lewin-groups-experiential-learning-and-action-research/

39. Wickens CD. Statistics. Ergon Des Q Hum Factors Appl 1998;6:18–22.

40. Howell DC. Fundamental statistics for the behavioral sciences, 9th ed. Boston, MA: Wadsworth Publishing, 2016:647.

41. Sunstein CR, Hastie R. Wiser: getting beyond groupthink to make groups smarter. Boston, MA: Harvard Business Review Press, 2014:272.

42. Salas E, Cooke NJ, Rosen MA. On teams, teamwork, and team performance: discoveries and developments. Hum Factors 2008;50:540–7.

43. Bonwell CC, Eison JA. Active learning: creating excitement in the classroom. ASHE-ERIC Higher Education Report [Internet]. Washington, DC: ERIC Clearinghouse on Higher Education, The George Washington University, 1991 [cited 2018 Dec 14]. Available at: https://eric.ed.gov/?id=ED336049

44. Fink LD. Creating significant learning experiences: an integrated approach to designing college courses, revised and updated [Internet]. Wiley.com [cited 2018 Dec 14]. Available at: https://www.wiley.com/en-ca/Creating+Significant+Learning+Experiences%3A+An+Integrated+Approach+to+Designing+College+Courses%2C+Revised+and+Updated-p-9781118124253

About the article

Corresponding author: Ghazwan Altabbaa, MD MSc CHSE FRCPC, Clinical Associate Professor, Department of Medicine, Cumming School of Medicine, University of Calgary, Rockyview General Hospital, 7007 14th St. S.W. Calgary, Alberta T2V1P9, Canada, Phone: +403 943-8693, Fax: +403 943-8535


Received: 2018-08-26

Accepted: 2019-03-17

Published Online: 2019-04-16

Published in Print: 2019-06-26


Author contributions: All the authors have accepted responsibility for the entire content of this submitted manuscript and approved submission.

Research funding: None declared.

Employment or leadership: None declared.

Honorarium: None declared.

Competing interests: The funding organization(s) played no role in the study design; in the collection, analysis, and interpretation of data; in the writing of the report; or in the decision to submit the report for publication.


Citation Information: Diagnosis, Volume 6, Issue 2, Pages 91–99, ISSN (Online) 2194-802X, ISSN (Print) 2194-8011, DOI: https://doi.org/10.1515/dx-2018-0084.


©2019 Walter de Gruyter GmbH, Berlin/Boston.
