In 2015, the National Academies of Sciences, Engineering, and Medicine put forth several calls to action to decrease diagnostic errors given their significant impact on the healthcare system, including to “Enhance health care professional education and training in the diagnostic process”. There is much debate about which educational strategies can improve the diagnostic process. Strategies grounded in script theory and dual process theory (DPT), focusing on knowledge reorganization and guided reflection, have shown promise. DPT stipulates that diagnostic reasoning arises from two interacting cognitive processes: an intuitive, pattern-recognizing System 1 and a slow, analytical System 2; the latter can be engaged through diagnostic time outs and through reflection on the diagnostic process. Educational strategies related to script theory focus on knowledge organization and include teaching about problem representations (one-sentence summaries of the patient’s presentation in abstract terms) and illness scripts (organized mental representations of diseases stored in memory).
Many medical schools lack an explicit curriculum in clinical reasoning (CR), citing limited faculty expertise among the barriers. Learners value faculty who can explain their CR in a structured way, but faculty struggle to demonstrate their own reasoning without a framework. Faculty development efforts are therefore needed to train faculty to teach CR, yet few reports of such programs exist. In this report, we describe the development and preliminary evaluation of a workshop to train faculty to teach CR using educational strategies from the CR literature.
This workshop was one of three in a year-long faculty development program (FDP) (from November 2017 to November 2018) to improve teaching skills. Each workshop focused on a different skill (CR, observation and feedback, and assessing the struggling learner). Topics covered were based on a needs assessment of key stakeholders including interviews with medical school deans and program directors as well as a survey of program participants. This work met the New York University School of Medicine’s criteria for certification as a quality improvement project (not a human subjects research project) using a self-certification process to ensure that the data were not collected for research purposes, there was minimal risk of harm, and the primary goal of the project was to improve teaching performance.
Group Observed Structured Teaching Exercise
To assess baseline performance, faculty participated in a Group Observed Structured Teaching Exercise (GOSTE) in November 2017. Based on the work of Holmboe et al., we chose a GOSTE rather than an OSTE to foster collaborative and peer-assisted learning. The GOSTE case simulated ward round teaching: participants were tasked with leading a discussion with three standardized learners (SLs) (medical student, intern, and resident) focusing on the learners’ CR in their approach to a case. SLs evaluated faculty performance on a scoring rubric using a three-point Likert scale with behavioral anchors for “not done”, “partially done”, and “well done” on four skills: use of problem representation and illness scripts, approaches to broadening the differential diagnosis, strategies to prioritize the differential diagnosis, and use of diagnostic time outs. All SLs underwent two 3-h training sessions, including one session focused on rubric scoring and calibration of scores. One year after the pre-GOSTE, participants completed a post-GOSTE (the same case as the pre-GOSTE) to assess change in performance.
We developed the content for this 3-h workshop using our needs assessment, the educational literature on teaching CR, and our years of experience implementing a CR curriculum in our medicine residency program (Table 1). We introduced participants to the impact diagnostic errors have on the healthcare system and to evidence-based educational strategies to teach CR. We reviewed the concepts of problem representation, illness scripts, and diagnostic time outs in a case-based interactive didactic. Subsequently, participants engaged in role-play exercises on teaching CR on ward rounds and received guided feedback from trained facilitators and program participants.
At the end of the workshop, participants completed (1) retrospective pre-post assessments (RPPs) that measured change in confidence and likelihood of using their newly acquired skills on an eight-point Likert scale (ranging from “not at all likely/confident” to “very likely/confident”) and (2) commitment to change statements (CTCs). After 2 months, participants self-reported whether they had fully, partly, or not yet implemented (without anchoring statements) their CTCs as well as barriers to implementation.
Results of the RPPs were analyzed using a Wilcoxon signed-rank test. Results of the GOSTE were dichotomized into “not done/partly done” and “well done” (allowing the investigators to accommodate for rater bias) and results were reported as percent “well done”. An independent sample t-test, adjusted according to Levene’s test of homogeneity of variance, was conducted to compare the effect of the GOSTE from pre- to post- on each domain of the scoring rubric.
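The analysis described above can be sketched with SciPy. The ratings, sample size, and variable names below are illustrative placeholders, not the study's data; the sketch only shows the shape of the pipeline: a Wilcoxon signed-rank test on paired Likert ratings, dichotomization of rubric scores, and a Levene's-test-adjusted independent t-test on the "well done" proportions.

```python
# Sketch of the reported analysis on made-up data (not the study's data).
import numpy as np
from scipy import stats

# Retrospective pre-post (RPP) confidence ratings on an 8-point Likert
# scale, paired by participant (hypothetical values).
rpp_pre = np.array([3, 4, 2, 5, 3, 4, 3, 2, 4, 3])
rpp_post = np.array([6, 7, 5, 7, 6, 6, 7, 5, 7, 6])
w_stat, w_p = stats.wilcoxon(rpp_pre, rpp_post)

# GOSTE rubric ratings dichotomized: "well done" = 1,
# "not done/partly done" = 0 (hypothetical values).
pre = np.array([0, 0, 1, 0, 0, 0, 0, 0, 0, 1])
post = np.array([1, 0, 1, 1, 0, 1, 0, 1, 0, 1])

# Levene's test of homogeneity of variance decides whether the
# independent t-test assumes equal variances.
lev_stat, lev_p = stats.levene(pre, post)
t_stat, t_p = stats.ttest_ind(pre, post, equal_var=(lev_p >= 0.05))

print(f"Wilcoxon p={w_p:.3f}; "
      f"percent well done: {pre.mean():.0%} -> {post.mean():.0%}; "
      f"t-test p={t_p:.3f}")
```

Reporting the dichotomized outcome as percent "well done" reduces each rubric domain to a single proportion per time point, which is what the pre/post comparison in the Results operates on.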
Twenty-six medicine faculty (nine hospitalists and 17 subspecialists), practicing for 1–17 years (median of 5), completed the CR workshop in December 2017. Twenty faculty completed both the pre- and post-GOSTE. Participants significantly improved in their use of problem representation and illness scripts from pre- to post-GOSTE. In the pre-GOSTE, 10% [standard deviation (SD)=30.5%] received a rating of “well done” compared to 38% (SD=49.8%) in the post-GOSTE (p=0.028). There was no significant difference in the “well done” ratings between the pre- and post-GOSTE for use of diagnostic time outs, teaching approaches to broadening the differential diagnosis, or strategies to prioritize the differential diagnosis.
After participating in this workshop, faculty were significantly more confident in their ability and more likely to teach CR using problem representations, illness script theory, and diagnostic time outs with a large effect size in all domains (Figure 1).
In CTCs, participants created specific plans for incorporating CR teaching strategies into their educational settings. At 2-month follow-up, 81% of participants had partially enacted their CTCs. Frequently cited barriers to implementation were insufficient time, logistical barriers, and forgetting. Only 4% of faculty reported insufficient skills.
Representative comments from CTCs include:
“When with a fellow who struggles with organizing their thoughts, [I will] focus on using (and teaching) the concept of problem representation as a means to systematically think through a case and create a differential.”
“In pivotal moments in case presentations, I will take a diagnostic time-out to discuss what fits/doesn’t fit.”
Similar to national reports, our faculty lacked a framework for teaching CR. After participating in this 3-h workshop, faculty demonstrated increased ability to use problem representation and illness scripts in their CR teaching in the post-GOSTE. Faculty also expressed greater confidence and an increased likelihood to teach CR using a framework. Through CTCs, participants created specific plans for incorporating CR strategies into their educational settings, and at 2-month follow-up the majority had partially implemented their CTCs.
Although educational strategies to teach CR are widely reported in the literature, few reports describe successful interventions to train faculty to teach using these methods. Our study adds to the findings of Dhaliwal (2013), who reported on a similarly structured workshop that increased faculty’s understanding of CR teaching methods and their intent to apply those methods in practice. More recently, Addy et al. (2016) published their experiences with training faculty to teach CR using a modified Bayesian approach. They observed five faculty and evaluated their implementation of CR skills by reviewing video recordings of teaching sessions. They reported variable degrees of implementation and barriers similar to those we found, including lack of time, forgetting to incorporate the methods, and insufficient skills.
One of our study limitations is that we have not directly observed whether faculty are implementing skills learned in this workshop in the real-world clinical environment. This is a challenge shared by many FDPs where learning occurs in workshops. Other limitations of this study include a small sample size, a self-selected group of faculty, and the use of self-reported data. Future evaluations of this program aim to include learner evaluations and direct observations of faculty workplace teaching skills. We plan to reevaluate the workshop structure and content for the next iteration to bring about improvement in all skills taught, including broadening the differential diagnosis, strategies to prioritize the differential diagnosis, and use of diagnostic time outs.
There is debate on how best to teach CR. Issues of context specificity generate questions such as: can we teach learners how to reason generally, or is reasoning best learned by repeated exposure to cases, thus building illness scripts? Providing learners with an array of strategies to approach cases, including guided reflection and knowledge reorganization, can help learners navigate the diagnostic process. Training faculty to teach CR with these methods can reinforce learner efforts and build a shared language for giving learners feedback on their CR.
In conclusion, a faculty development workshop on teaching CR increased faculty performance, confidence, and likelihood of teaching CR. Participants demonstrated improved use of problem representation and illness scripts as observed in the post-GOSTE. Participants also reported implementing changes in their CR teaching in their educational environments. A theory-based CR workshop can be an effective strategy to build a cadre of faculty empowered with the skills and the confidence to teach the next generation of physicians this core skill.
National Academies of Sciences, Engineering, and Medicine. Improving diagnosis in health care. Washington, DC: National Academies Press, 2016.
Lambe KA, O’Reilly G, Kelly BD, Curristan S. Dual-process cognitive interventions to enhance diagnostic reasoning: a systematic review. BMJ Qual Saf 2016;25:808–20.
Norman GR, Monteiro SD, Sherbino J, Ilgen JS, Schmidt HG, Mamede S. The causes of errors in clinical reasoning: cognitive biases, knowledge deficits, and dual process thinking. Acad Med 2017;92:23–30.
Kahneman D. Thinking, fast and slow. New York: Farrar, Straus and Giroux, 2011.
Rencic J, Trowbridge RL, Fagan M, Szauter K, Durning S. Clinical reasoning education at US medical schools: results from a national survey of internal medicine clerkship directors. J Gen Intern Med 2017;32:1242–6.
Audetat MC, Dory V, Nendaz M, Vanpee D, Pestiaux D, Junod Perron N, et al. What is so difficult about managing clinical reasoning difficulties? Med Educ 2012;46:216–27.
Addy TM, Hafler J, Galerneau F. Faculty development for fostering clinical reasoning skills in early medical students using a modified Bayesian approach. Teach Learn Med 2016;28:415–23.
Ludwig AB, Raff AC, Lin J, Schoenbaum E. Group observed structured encounter (GOSCE) for third-year medical students improves self-assessment of clinical communication. Med Teach 2017;39:931–5.
Published Online: 2019-03-09
Published in Print: 2019-06-26
Author contributions: All the authors have accepted responsibility for the entire content of this submitted manuscript and approved submission.
Conflicts of interest: None of the authors on this paper have conflicts of interest to disclose.
Research funding: This faculty development program was funded by the generous donation of an anonymous donor.
Competing interests: The funding Organization(s) played no role in the study design; in the collection, analysis and interpretation of data; in the writing of the report; or in the decision to submit the report for publication.