Diagnostic errors are important contributors to patient morbidity and mortality, and rank among the top patient safety concerns for healthcare organizations. In 2015, the National Academies of Sciences, Engineering, and Medicine called for enhanced training in the diagnostic process for all health care professionals. Within undergraduate medical education (UME), the Association of American Medical Colleges (AAMC) has also identified clinical reasoning as a key competency. However, a recent cross-sectional study of internal medicine clerkship directors demonstrated that most US medical schools lack formal educational sessions dedicated to clinical reasoning.
The most effective educational strategies to improve trainee clinical reasoning have not been established. A recent systematic review of dual process theory-based interventions found six published educational curricula, in addition to studies of workplace interventions such as checklists, cognitive forcing strategies, guided reflection, and instruction to use a particular type of reasoning. Of these six curricula, four were performed within UME and only two included teaching on individual reasoning steps and skills. The interventions in the latter studies consisted of a case-based reasoning seminar for senior medical students. The first study focused on self-reflection and knowledge of the steps of the reasoning process, while the second highlighted the application of Bayes' theorem in diagnosis. Both studies demonstrated that a case-based curriculum grounded in dual process theory can result in improved clinical reasoning as assessed by standardized patient cases and written examinations. Only one study identified in the systematic review involved assessment of reasoning in a hospital setting. This study, performed among staff-level emergency medicine physicians, assessed the impact of a checklist on the extensiveness of differential diagnosis and resource utilization. There was a trend toward more expansive differential diagnoses and increased numbers of tests and consults ordered when providers used the checklist. However, the difference was not statistically significant, and the clinical impact of these changes was uncertain.
It is unknown if clinical reasoning instruction can impact student clinical reasoning within the context of authentic patient care. Therefore, we developed and implemented a clinical reasoning curriculum for medical students on an internal medicine clerkship. We hypothesized that structured clinical reasoning education would improve knowledge of clinical reasoning concepts, application of clinical reasoning concepts in patient care, and students’ perception of their own clinical reasoning education.
We conducted a pseudo-randomized and controlled study to evaluate the impact of a clinical reasoning curriculum on third-year medical students’ knowledge of clinical reasoning concepts, skills, and perception of clinical reasoning education at a single allopathic US medical school.
Third-year medical students at the University of Pittsburgh School of Medicine (UPSOM) who were enrolled in their internal medicine clerkship between January and June 2017 were eligible to participate. The students who participated in the curriculum from January to April were at the end of their third year and were referred to as “experienced”, and students who participated during May and June were in the first 2 months of their third year and were referred to as “novices”.
Existing educational structure
At UPSOM, students are randomly assigned to one of four clinical training sites for each half of their 8-week inpatient internal medicine clerkship. Three of the training sites are academic medical centers associated with UPSOM, and one site is the Veterans Administration hospital associated with the medical school and residency training programs. All hospital sites are within 3 miles of each other. Students at all sites rotate on general medicine wards and participate in patient care on a team that includes one attending physician, one senior resident, two interns, and two third-year medical students. The faculty at all sites have similar qualifications, participate in a centralized faculty development program, and are evaluated according to a uniform set of expectations for faculty promotion and teaching evaluations. The residents at each site are all part of the same training program. The students at all sites have the same set of expectations for the clerkship, including the same number of admissions or patients to follow per week, the same educational conferences, and the same expectations for feedback. All third-year students also participate in daily Student Teaching Attending (STA) sessions, which are 1-h case-based clinical reasoning discussions with six to eight students led by a faculty member.
Students who were assigned to site 1 in the first half of their rotation formed the intervention group, and those assigned to sites 2, 3, and 4 formed the control group. Students in the intervention group participated in the curriculum on the first afternoon of their clerkship, completing the modules in a computer lab and then participating in the skills workshop. In order to ensure educational equity, students in the control group completed the modules outside of the study period. Students in both intervention and control groups otherwise had similar educational experiences throughout the internal medicine clerkship, including the previously described ward teams and the STA sessions.
The project met all necessary criteria for an exemption under 45 CFR 46.101(b) for educational strategies, curricula, or classroom management methods, as designated by the University of Pittsburgh Institutional Review Board (PRO16110049). Participants were given the opportunity to request that their work not be used for research purposes.
The curriculum consisted of two components: six interactive online modules and a case-based workshop.
We used online modules that were developed by faculty within the Department of Medicine at the University of Pittsburgh Medical Center (UPMC). The modules taught clinical reasoning topics that are ranked highly by expert educators. The topics were: (1) diagnostic error, (2) cognitive psychology of decision-making, (3) specific clinical reasoning skills, including use of semantic qualifiers and problem representation, and (4) cognitive biases and heuristics (including those related to overconfidence, probability estimation, and affective biases). The modules included didactic videos, simulated clinical cases, and interactive prompts for open-ended and multiple-choice questions. Each module took 20 min to complete, for a total of 2 h of online content.
Clinical reasoning workshop
The second component consisted of a 1-h skills-based workshop. We provided students with a written case and prompted them to practice four specific skills: (1) identification of key clinical findings, (2) translation of clinical information into semantic qualifiers, (3) generation of a problem representation/summary statement, and (4) creation of a prioritized differential diagnosis. Students first practiced each skill in groups of three to four. A facilitated large-group discussion regarding each of the four skill-based tasks followed the small groups.
We developed the content for this workshop based on principles for curricular content described in Teaching Clinical Reasoning, including cases with prototypical illness scripts, an analysis of clinical reasoning processes during case presentations, a discussion of cognitive psychology principles that pertain to clinical reasoning, dual process thinking, and script theory. Specific content and delivery modalities were derived from reviewing the literature for common errors in medical student reasoning, and the previously mentioned case-based educational interventions. The workshop did not include instruction regarding how to document clinical reasoning in a hospital admission note.
We evaluated participants according to three domains: knowledge of clinical reasoning concepts, clinical reasoning skills, and student perception of clinical reasoning concepts in their clerkship experience.
We administered a 20-question post-curricular quiz to the control and intervention groups at the end of the 4-week intervention period. The quiz addressed definitions and concepts that were introduced in the online modules (Supplementary Material). The study authors developed the quiz and piloted it among medical education fellows and third-year medical students outside of the study to optimize clarity, content, and difficulty of the questions.
We used the Interpretive summary, Differential diagnosis, Explanation of reasoning and Alternatives (IDEA) tool to assess clinical reasoning skills. IDEA was designed to assess clinical reasoning in third-year medical student hospital admission notes. The tool has three sections: (1) a data gathering section, for assessment of comprehensiveness and contextualization of the history of present illness, historical features, and physical exam; (2) a data synthesis section for evaluation of reasoning in the written assessment and plan; and (3) a global rating of the reporting, diagnostic reasoning, and decision-making skills. The tool utilizes a three-point scale for each section.
Evaluation of written justification for medical decisions is congruent with educational expectations in the UME literature. According to the AAMC's core entrustable professional activities for UME, students are expected to be able to accurately document their clinical decision-making. Furthermore, because student hospital admission notes are included in the medical record and are evaluated by the attending physicians as part of the student's grade (and are thus high stakes), we feel confident that the notes represented each student's best attempt to illustrate their reasoning.
As part of the existing clerkship STA session, the students prepared one hospital admission note per week for grading and feedback by their STA physician. One investigator (DD), who was not involved in evaluation of the notes, collected these notes from all students on the medicine clerkship during the intervention period. The same investigator de-identified the notes by removing student names, identifying patient information, and study site information. Two independent investigators (EB, AF) evaluated 30% of the notes in a blinded fashion. The investigators did not provide feedback to the students and did not see the feedback provided by the STA physician. Percent agreement by item was calculated with an overall agreement of 69%, consistent with the fair to moderate agreement reported in the tool's validation study. After calculation of percent agreement, one investigator (EB) assessed the remainder of the notes.
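The percent-agreement statistic used here is simply the proportion of rated items on which the two raters assigned the same score. A minimal sketch, using hypothetical ratings rather than the study's data:

```python
# Percent agreement: fraction of items on which two raters assign the same score.
# The ratings below are hypothetical, for illustration only (IDEA items are
# scored on a three-point scale).
rater_a = [3, 2, 2, 1, 3, 2, 1, 3, 2, 2]
rater_b = [3, 2, 1, 1, 3, 3, 1, 3, 2, 1]

matches = sum(a == b for a, b in zip(rater_a, rater_b))
percent_agreement = matches / len(rater_a)
print(f"Percent agreement: {percent_agreement:.0%}")  # 7 of 10 items agree -> 70%
```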
Perception of educational experiences
The students were surveyed regarding their attending physicians’ use of clinical reasoning terminology, their attending physicians’ delineation of their own clinical reasoning, and the utility of the clinical reasoning concepts in their daily work on the rotation. Responses were graded on a five-point Likert-style scale.
We conducted a priori power analyses to assure at least 80% power to detect a 10-point difference in average skill scores between groups and a four-point difference in knowledge scores between groups. For analysis of quiz and Likert-scale data, we compared mean group scores using t-tests for two independent samples. For analysis of IDEA tool ratings, we calculated mean scores per student overall and per item and compared groups using t-tests for two independent samples.
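The group comparisons described above rest on the standard pooled-variance t statistic for two independent samples. A minimal sketch with hypothetical quiz scores (the study itself used Stata, not the code below):

```python
import statistics
from math import sqrt

def two_sample_t(x, y):
    """Student's t statistic for two independent samples, using the
    pooled-variance estimate. Returns the t value; degrees of freedom
    are len(x) + len(y) - 2."""
    nx, ny = len(x), len(y)
    pooled_var = ((nx - 1) * statistics.variance(x) +
                  (ny - 1) * statistics.variance(y)) / (nx + ny - 2)
    standard_error = sqrt(pooled_var * (1 / nx + 1 / ny))
    return (statistics.mean(x) - statistics.mean(y)) / standard_error

# Hypothetical 20-point quiz scores for two groups (not the study's data)
intervention = [14, 13, 15, 12, 13, 16, 11, 14]
control = [11, 10, 12, 9, 11, 13, 10, 12]
t = two_sample_t(intervention, control)
print(f"t = {t:.2f} with {len(intervention) + len(control) - 2} df")
```

The resulting t value would then be compared against the t distribution with n1 + n2 - 2 degrees of freedom to obtain the p-values reported in the Results.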
In secondary subgroup analyses, we compared mean IDEA tool ratings by study group assignments within each of the experienced student cohorts. We also compared mean IDEA tool ratings by study group using only admission notes submitted during week 1 of the intervention period. We performed all analyses using Stata SE 14.1 for Windows (Stata Corp, College Station, TX, USA).
A total of 67 of 68 (99%) eligible students participated in the study. One student from the control group requested to be excluded from the assessments. Overall, 40 (60%) students were in the experienced cohort, while 27 (40%) students were in the novice cohort. The intervention group consisted of 34 (51%) students, while 33 (49%) students formed the control group; this distribution of intervention and control students was the same in the experienced and novice cohorts.
At the end of the 4-week intervention period, 66 of the 67 (99%) students completed the 20-point clinical reasoning concept quiz. Students in the intervention group, on average, scored higher than students in the control group (13.30±2.30 vs. 10.88±3.04, p<0.001).
We collected 250 of 268 (93%) eligible hospital admission notes. The remaining 7% of the notes were not turned in to study investigators. There were no significant differences in mean total IDEA scores by study group assignments or by comparison of novice and experienced students. However, students in the intervention group, on average, scored higher by item in the data synthesis and the diagnostic reasoning portions of the IDEA assessment tool (Table 1). This superior performance was also seen when we limited our analysis to admission notes collected during week 1 (Appendix 1). Data stratified by experience cohort are displayed in Appendix 2.
Perception of educational experiences
At the end of the study period, 66 of 67 (99%) students completed a post-curriculum survey. Students in the intervention group reported high levels of agreement with a statement regarding use of clinical reasoning skills learned during the curriculum in their clinical work (3.83±0.70 out of 5), with experienced students reporting higher ratings than novice students (4.16±0.50 vs. 3.27±0.65, p<0.01).
Students in the intervention group reported higher agreement with the statement, “My attending explicitly discussed common clinical reasoning terminology”, in reference to both ward and STA physicians. Students in the intervention group also reported higher ratings of agreement with the statement, “My attending explicitly outlined his/her clinical reasoning on most cases”, in reference to STA physicians (Table 2).
In this pseudo-randomized and controlled study of a clinical reasoning curriculum for third-year medical students on an internal medicine clerkship, we observed superior knowledge of clinical reasoning concepts, superior utilization of clinical reasoning skills in admission notes, and increased awareness of attending physician utilization of clinical reasoning concepts in the clinical and conference room settings. To our knowledge, this study represents the first investigation of a curricular intervention’s impact on clinical reasoning skills in the context of patient care, as opposed to simulated or written clinical cases.
Clinical reasoning vocabulary and concepts can serve as a shared foundation for improvement by informing assessment methods, remediation plans, and reflection on one's own cognitive performance. Medical students value instruction on clinical reasoning concepts because they provide an intuitive structure for reasoning, and they find the subject helpful for improving their own clinical reasoning. Our demonstration of knowledge durability at 4 weeks contrasts with previous studies, which have demonstrated immediate knowledge changes. Whether gaining such knowledge contributes to the development and refinement of reasoning skills is unknown.
The ultimate goal of clinical reasoning interventions is the enhancement of reasoning skills at the point of patient care. Our evaluation of student reasoning skills in the context of patient care distinguishes our study from previous investigations and suggests that the introduction of clinical reasoning concepts may contribute to improvements in reasoning skills in medical students on an internal medicine clerkship. While the absolute difference in IDEA skill ratings between our intervention and control groups was small (0.3 points on a three-point scale), this was in the context of multiple factors that were likely to bias results toward the null including the brief one-time nature of the intervention, the well-established standard educational experiences provided to both intervention and control groups, and the ample opportunities for feedback from clinical supervisors for both groups. At the time of a 2016 systematic review, no published educational intervention had evaluated an outcome higher than 2b (acquisition of knowledge or skills) on Kirkpatrick’s adapted hierarchy. Our study is the first to evaluate a Kirkpatrick Level 3 (change in behavior with transfer of knowledge to the workplace) outcome for clinical reasoning education.
The optimal methods for teaching clinical reasoning skills have not been established. A 2017 cross-sectional survey of internal medicine clerkship directors revealed that the most common settings for clinical reasoning education within existing UME internal medicine programs are attending rounds and morning report. A small percentage of programs reported sessions or online modules specifically devoted to clinical reasoning education. Our findings suggest that use of online programs coupled with clinical reasoning-specific sessions may enhance clinical reasoning education within UME.
We also found that students who were exposed to our curriculum reported an increased recognition of an explicit clinical reasoning approach in their role models. This suggests an increased awareness and identification of clinical reasoning concepts by students in the intervention group. This finding lends support to the idea that education can prime students to identify and learn from reasoning by their clinical teachers that they may have otherwise missed.
Our study has several limitations. First, it was conducted at a single US allopathic medical school, limiting generalizability. Second, as we randomized students by site, it is possible that differences in the teachers, learning environment, or other site-specific characteristics confounded the relationship between our intervention and outcomes. For instance, it is possible that attending physicians at the intervention site were more aware of, or more facile with, clinical reasoning terminology because of their proximity to the curriculum than faculty at the control sites. However, in addition to the previously mentioned similarities in attending physicians between sites, the training sites share the same population of residents and similar patient populations. Additionally, we observed superior clinical reasoning skills in the intervention group compared to controls even during the first week of the intervention period, when exposure to their clinical training site was limited. We also found that intervention group students had superior performance in areas that were targeted by our curriculum (data synthesis, diagnostic reasoning), but not in areas left unaddressed (data gathering). Collectively, these findings suggest that the curriculum (and not the site) was responsible for the outcomes.
Lastly, we relied on written documentation in hospital admission notes as a proxy for the students’ clinical reasoning. It is possible that this mode of assessment failed to capture aspects of reasoning that were not written. We also cannot exclude the possibility that students may have received feedback on the clinical reasoning documented in their notes from their attending physicians or residents who were working with them. However, we would not expect these effects to differ by site.
In conclusion, we have demonstrated that structured clinical reasoning education can lead to improved knowledge of clinical reasoning concepts and superior written clinical reasoning skills in third-year medical students on an internal medicine clerkship. Our findings suggest that online modules coupled with a clinical reasoning workshop are effective and can prime students to more readily identify reasoning processes in their clinical role models. Next steps include development and evaluation of clinical reasoning curricula for pre-clinical medical students. Medical educators should consider addition of online modules and/or workshop sessions to introduce clinical reasoning concepts and facilitate clinical reasoning skill development.
Balogh EP, Miller BT, Ball JR. Improving diagnosis in health care. Washington, DC: National Academies Press, 2015.
Flynn T. Core entrustable professional activities for entering residency-curriculum developers' guide. Washington, DC: Association of American Medical Colleges, 2014.
Rencic J, Trowbridge RL Jr, Fagan M, Szauter K, Durning S. Clinical reasoning education at US medical schools: results from a national survey of internal medicine clerkship directors. J Gen Intern Med 2017;32:1242–6.
Lambe KA, O'Reilly G, Kelly BD, Curristan S. Dual-process cognitive interventions to enhance diagnostic reasoning: a systematic review. BMJ Qual Saf 2016;25:808–20.
Nendaz MR, Gut AM, Louis-Simonet M, Perrier A, Vu NV. Bringing explicit insight into cognitive psychology features during clinical reasoning seminars: a prospective, controlled study. Educ Health (Abingdon) 2011;24:496.
Graber ML, Sorensen AV, Biswas J, Modi V, Wackett A, Johnson S, et al. Developing checklists to prevent diagnostic error in Emergency Room settings. Diagnosis (Berl) 2014;1:223–31.
Trowbridge RL, Rencic JJ, Durning SJ, editors. Teaching clinical reasoning. Philadelphia, PA: American College of Physicians, 2015.
Baker EA, Ledford CH, Fogg L, Way DP, Park YS. The IDEA assessment tool: assessing the reporting, diagnostic reasoning, and decision-making skills demonstrated in medical students' hospital admission notes. Teach Learn Med 2015;27:163–73.
Smith S, Kogan JR, Berman NB, Dell MS, Brock DM, Robins LS. The development and preliminary validation of a rubric to assess medical students' written summary statements in virtual patient cases. Acad Med 2016;91:94–100.
Reilly JB, Ogdie AR, Von Feldt JM, Myers JS. Teaching about how doctors think: a longitudinal curriculum in cognitive bias and diagnostic error for residents. BMJ Qual Saf 2013;22:1044–50.
The online version of this article offers supplementary material (https://doi.org/10.1515/dx-2018-0063).
About the article
Published Online: 2019-03-28
Published in Print: 2019-06-26
Author contributions: All the authors have accepted responsibility for the entire content of this submitted manuscript and approved submission.
Research funding: Funding support was provided by a grant from the Thomas H. Nimick, Jr. Competitive Research Fund of UPMC Shadyside Hospital and the Shadyside Foundation. Support for development of online modules was provided by a grant from the Hearst Foundations awarded to Dr. William Follansbee.
Employment or leadership: None declared.
Honorarium: None declared.
Competing interests: The funding organization(s) played no role in the study design; in the collection, analysis, and interpretation of data; in the writing of the report; or in the decision to submit the report for publication.