Open Access (CC BY 4.0 license). Published by De Gruyter, October 28, 2020

Handshake antimicrobial stewardship as a model to recognize and prevent diagnostic errors

Justin B. Searns, Manon C. Williams, Christine E. MacBrayne, Ann L. Wirtz, Jan E. Leonard, Juri Boguniewicz, Sarah K. Parker and Joseph A. Grubenhoff
From the journal Diagnosis

Abstract

Objectives

Few studies describe the impact of antimicrobial stewardship programs (ASPs) on recognizing and preventing diagnostic errors. Handshake stewardship (HS-ASP) is a novel ASP model that prospectively reviews hospital-wide antimicrobial usage with recommendations made in person to treatment teams. The purpose of this study was to determine if HS-ASP could identify and intervene on potential diagnostic errors for children hospitalized at a quaternary care children’s hospital.

Methods

Previously self-identified “Great Catch” (GC) interventions by the Children’s Hospital Colorado HS-ASP team from 10/2014 through 5/2018 were retrospectively reviewed. Each GC was categorized based on the types of recommendations from HS-ASP, including if any diagnostic recommendations were made to the treatment team. Each GC was independently scored using the “Safer Dx Instrument” to determine presence of diagnostic error based on a previously determined cut-off score of ≤1.50. Interrater reliability for the instrument was measured using a randomized subset of one third of GCs.

Results

During the study period, there were 162 GC interventions. Of these, 65 (40%) included diagnostic recommendations by HS-ASP and 19 (12%) had a Safer Dx Score of ≤1.50 (κ=0.44; moderate agreement). Of those GCs associated with diagnostic errors, the HS-ASP team made a diagnostic recommendation to the primary treatment team 95% of the time.

Conclusions

Handshake stewardship has the potential to identify and intervene on diagnostic errors for hospitalized children.

Introduction

Diagnostic errors occur frequently and often go unrecognized unless they harm patients via delayed, incorrect, or missed diagnoses [1], [2]. In the US, 35–51% of pediatricians report making a diagnostic error at least monthly and 82% report ever making a diagnostic error that harmed a patient [3], [4], [5]. In addition, infections are one of the most commonly identified disease processes associated with diagnostic errors [6]. Developing systems to identify and intervene on diagnostic errors in real time could significantly improve patient safety and clinical care [7], [8], [9], [10].

Antimicrobial stewardship programs (ASPs) identify and intervene on a wide variety of patient safety incidents related to antimicrobial usage. ASPs routinely correct inappropriate antimicrobial choice or dosage, prevent use of unnecessary antimicrobials, decrease adverse drug events, and develop clinical pathways that standardize patient care. Handshake stewardship (HS-ASP) is a novel and highly successful antimicrobial stewardship model developed at Children’s Hospital Colorado (CHCO) and implemented in October 2013 [11], [12], [13]. Under the HS-ASP model, a physician and pharmacist review relevant clinical data in the electronic health record (EHR) for all hospitalized patients receiving any antimicrobial at 24 and 72 h after starting treatment. Recommendations are then communicated in person to medical and surgical teams Monday through Friday.

By design, HS-ASP requires a compressed "second look" at the presenting symptoms, laboratory and radiographic results, and documented medical decision-making for hospitalized patients. HS-ASP thus has the potential to intervene on a wide range of patient safety concerns, including possible diagnostic errors. The purpose of this study was to retrospectively determine whether HS-ASP interventions identify diagnostic errors among hospitalized children and thereby contribute to the diagnostic process.

Materials and methods

The HS-ASP model was implemented at CHCO in October 2013. Starting in October 2014, the CHCO HS-ASP team began prospectively labeling some specific interventions as "Great Catches" (GCs). Select HS-ASP interventions were deemed GCs by the individual steward if the intervention "notably changed or had the potential to change the trajectory of patient care" (Table 1). Patient EHR documentation for all GCs from October 2014 through May 2018 was retrospectively reviewed. Based on HS-ASP recommendations, each GC was assigned one or more intervention categories, including therapeutic and diagnostic interventions related to individual patient care, as well as epidemiologic interventions that identified an important issue beyond the individual patient receiving the intervention. Epidemiologic interventions identified potential hospital-wide concerns such as emerging outbreaks within the hospital, as well as patients with highly contagious diseases requiring specific isolation precautions or antimicrobial prophylaxis for close contacts (Table 2).

Table 1:

Representative great catch examples.

Case 1 (Safer Dx error score: 0.949)
 Description of case: Twelve-year-old with headache and fever, mild CSF pleocytosis, significantly elevated CRP. CSF PCR studies negative. Diagnosed with "viral meningitis."
 ASP recommendation: ASP suggested evaluation for parameningeal infection focus.
 Impact on care: MRI brain identified cavernous sinus thrombosis with extending purulence and sphenoid sinusitis.

Case 2 (Safer Dx error score: 0.996)
 Description of case: Two-month-old with GBS bacteremia and vaginal ecchymosis. Treated for late-onset GBS sepsis; team planning to consult child protection team for sexual abuse evaluation due to "vaginal bruising."
 ASP recommendation: ASP educated team about the known phenomenon of violaceous skin lesions in patients with GBS sepsis.
 Impact on care: Team held off on child protection team consultation, which prevented family stress, negative impact on the therapeutic relationship, and unnecessary use of resources.

Case 3 (Safer Dx error score: 0.731)
 Description of case: Eight-month-old with CSF pleocytosis felt to be "viral meningitis," though significantly elevated ESR/CRP.
 ASP recommendation: ASP suggested rethinking the diagnosis and a formal infectious diseases consult based on elevated inflammatory markers.
 Impact on care: ID consulted; based on history and exam, recommended echocardiogram. Found to have significant coronary artery aneurysms. Patient diagnosed and treated for Kawasaki disease.

Case 4 (Safer Dx error score: 1.234)
 Description of case: One-month-old with fever. CSF with one WBC; also found to have mild thrombocytopenia and elevated transaminases. Admitted to NICU and started on sepsis rule-out with ampicillin, acyclovir, and cefotaxime.
 ASP recommendation: ASP recommended parechovirus PCR testing on CSF.
 Impact on care: Parechovirus PCR was positive, antimicrobials were discontinued, and patient was discharged.

Table 2:

Great catch recommendation categories.

Therapeutic
  • Medication administration error
  • Empiric therapy escalation/de-escalation
  • Bug-drug mismatch
  • Inappropriate dose/duration
  • Potential adverse effect
  • Prevent/shorten hospital admission

Diagnostic
  • Consider alternative diagnosis
  • Additional testing needed
  • Microbiology result interpretation

Epidemiologic
  • Need for prophylaxis for close contacts of contagious disease
  • Epidemiology investigation/potential outbreak
  • Cases representative of variability in care that would benefit from clinical pathways
  • Change to patient isolation/precautions
  • Preventative care or immunization

Each intervention was scored by a non-ASP pediatric infectious disease physician (Reviewer #1, JS) using the previously validated "Safer Dx Instrument" [7], [14]. The Safer Dx Instrument (Table 3) is an 11-item rating survey designed to retrospectively determine whether a given clinical scenario involved diagnostic error, defined as "missed opportunities to make a correct or timely diagnosis based on the available evidence, regardless of patient harm" [7]. When first describing the Safer Dx Instrument, Al-Mutairi et al. used multivariate logistic regression to develop an equation that calculates an overall "error score" for any given clinical scenario from the instrument's item ratings: Error score = 0.395 + (∑Items 1, 2, 5–7, 9, 10 × 0.03) + (∑Items 3, 8 × 0.003) + (Item 4 × −0.005) + (Item 11 × 0.05) [7]. Using this equation, the lowest possible error score is 0.656 and the highest is 1.961; the lower the error score, the stronger the association with diagnostic error. Al-Mutairi et al. offer an error score ≤1.50 as an example threshold for designating the presence of a diagnostic error; we used this same cutoff to identify GCs associated with a diagnostic error [7]. A 12th question was also included, stating: "In conclusion, based on all the above questions, the episode of care under review had a diagnostic error." Reviewers' answers to question 12 were included as a dichotomized variable for whether the reviewer felt the case included a diagnostic error overall.

Table 3:

The Safer Dx Instrument [7].

Rate the following items for the episode of care under review: (1 strongly agree; 6 strongly disagree)
  1. The history that was documented at the patient–provider encounter was suggestive of an alternate diagnosis, which was not considered in the assessment.

  2. The physical exam documented at the patient–provider encounter was suggestive of an alternate diagnosis, which was not considered in the assessment.

  3. Diagnostic testing data (laboratory, radiology, pathology or other results) associated with the patient–provider encounter were suggestive of an alternate diagnosis, which was not considered in the initial assessment.

  4. The diagnostic process at the initial assessment was affected by incomplete or incorrect clinical information given to the care team by the patient or their primary caregiver.

  5. The clinical information (i.e., history, physical exam or diagnostic data) present at the initial assessment should have prompted additional diagnostic evaluation through tests or consults.

  6. The initial assessment at an earlier visit was appropriate, given the patient’s medical history and clinical presentation.

  7. Alarm symptoms or “red flags” (i.e., features in the clinical presentation that are considered to predict serious disease) were not acted upon at an earlier assessment.

  8. Diagnostic data (laboratory, radiology, pathology or other results) available or documented at the initial assessment were misinterpreted in relation to the subsequent final diagnosis.

  9. The differential diagnosis documented at the initial assessment included the subsequent final diagnosis.

  10. The final diagnosis was an evolution of the initial presumed diagnosis.

  11. The clinical presentation was not typical of the final diagnosis.

Safer Dx error score = 0.395 + (∑Items 1, 2, 5–7, 9, 10 × 0.03) + (∑Items 3, 8 × 0.003) + (Item 4 × −0.005) + (Item 11 × 0.05).
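As a sanity check on the scoring arithmetic, the equation above can be expressed in a few lines of Python (an illustrative sketch, not part of the original study; item ratings run from 1, strongly agree, to 6, strongly disagree):

```python
def safer_dx_score(ratings):
    """Safer Dx error score from item ratings (dict: item number 1-11 -> rating 1-6)."""
    return (0.395
            + 0.03 * sum(ratings[i] for i in (1, 2, 5, 6, 7, 9, 10))
            + 0.003 * (ratings[3] + ratings[8])
            - 0.005 * ratings[4]
            + 0.05 * ratings[11])

# The extremes reported in the text: all items rated 1 vs. all items rated 6
print(round(safer_dx_score({i: 1 for i in range(1, 12)}), 3))  # 0.656
print(round(safer_dx_score({i: 6 for i in range(1, 12)}), 3))  # 1.961
```

A case scoring ≤1.50 would then be flagged as involving a diagnostic error under the cutoff used in this study.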

To measure interrater reliability (IRR) of the Safer Dx scores, a second non-ASP pediatric infectious disease physician (Reviewer #2, JB) reviewed a random sample of 55 of 162 (34%) GCs and was blinded to the first reviewer’s scores. The need for 55 chart reviews was calculated based on previously described nomograms for kappa coefficient calculation [15]. Prior to completing these 55 GC reviews, five non-randomized GCs were reviewed by both reviewers together to calibrate scoring using the Safer Dx instrument and develop a shared mental model as recommended [16]. Results for the final Safer Dx scores from both reviewers and the dichotomized item 12 (“yes/no this case involved a diagnostic error”) were compared; a kappa coefficient and 95% confidence intervals (CI) were calculated for both measures.

Descriptive statistics were presented for patient demographic data and intervention data. A χ² test was performed to compare the percentage of diagnostic intervention recommendations for patients with a diagnostic error to those without a diagnostic error. This project was approved by the CHCO Organizational Research Risk and Quality Improvement Review Panel.

Results

From October 1, 2014 through May 31, 2018, there were 85,296 inpatient admissions to CHCO, of which 35,576 (42%) received an antimicrobial during hospitalization. The HS-ASP team formally intervened on 6,735 (19%) antimicrobial-associated admissions during the study period. Among HS-ASP interventions, 174 (2.6%) were labeled by HS-ASP stewards as GCs. After retrospective review by Reviewer #1, 12 GCs were excluded for not meeting the definition of GC, chart duplication, or inadequate documentation of the HS-ASP intervention, leaving 162 GCs for analysis (Table 4). Nearly half of the GCs (48%) involved interventions with general medical teams or surgical services, and one third (35%) occurred in an intensive care setting.

Table 4:

Patient and intervention overview for great catches (n=162).

Number of patients, %
Patient demographics
 Median age, years 4.7; IQR: 0.4–12
 Male 92, 57
Treatment team
 General medical teams 48, 30
 Surgical services 29, 18
 NICU 27, 17
 PICU 18, 11
 Heme/Onc 18, 11
 CICU/CPCU 11, 7
 Emergency department 10, 6
 Outpatient primary care 1, 1
Intervention overview
 Therapeutic interventions 137, 85
 Diagnostic interventions 65, 40
 Epidemiologic interventions 11, 7
 Intervention(s) “accepted” by team 158, 98
ID consultation overview
 ID consult recommended 39, 24
 ID consult within three days of intervention 27/39, 69

Many of the GCs were assigned more than one intervention category (therapeutic, diagnostic, and epidemiologic). Most GCs involved therapeutic interventions only (91/162, 56%), while 12% (19/162) provided diagnostic recommendations only; 28% (46/162) included both therapeutic and diagnostic recommendations to primary teams (Table 4). A small group of interventions (11/162, 7%) had larger epidemiologic implications (Tables 2 and 4). Most HS-ASP recommendations were implemented by the treatment teams (158/162, 98%). In addition to these therapeutic, diagnostic, and epidemiologic recommendations, the HS-ASP team recommended obtaining formal infectious disease (ID) consultation in 39/162 (24%) of GC interventions; when ID consultation was recommended, teams obtained the consult within three days 69% of the time (Table 4).

After primary review, 19 (12%) GC cases had a calculated overall Safer Dx Score of ≤1.50, indicating presence of diagnostic error (Table 5). Demographic information, including patient age and treatment team, was similar for GCs with and without diagnostic errors. Of those GCs associated with a diagnostic error, 95% included at least one diagnostic recommendation from the HS-ASP team, whereas among GCs without diagnostic error (Safer Dx Score >1.50), only 33% had a diagnostic recommendation from HS-ASP, p<0.001 (Table 5). Of those GCs with Safer Dx Score ≤1.50, 17 (89%) had an answer of “yes” for question 12. Among patients who met criteria for a diagnostic error, treatment teams accepted HS-ASP recommendations in 18/19 (95%) with a Safer Dx Score ≤1.50 and 16/17 (94%) with an answer of “yes” for question 12. There were no GCs with a calculated Safer Dx score >1.50 for which the reviewer answered “yes” to question 12.
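The reported comparison can be reproduced with a Pearson χ² statistic on the implied 2×2 table. A minimal sketch follows; note that the non-error counts (47/143 with a diagnostic recommendation) are an assumption inferred from the reported 33% and the 65 total diagnostic interventions, not taken directly from a published table:

```python
def chi2_2x2(a, b, c, d):
    """Pearson chi-square statistic (no continuity correction) for a 2x2 table
    laid out as [[a, b], [c, d]]."""
    n = a + b + c + d
    expected = [(a + b) * (a + c) / n, (a + b) * (b + d) / n,
                (c + d) * (a + c) / n, (c + d) * (b + d) / n]
    observed = [a, b, c, d]
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Rows: GCs with / without diagnostic error
# Columns: diagnostic recommendation made / not made (non-error counts inferred)
stat = chi2_2x2(18, 1, 47, 96)
print(round(stat, 1))  # 26.7, above the 10.83 critical value for p<0.001 at 1 df
```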

Table 5:

Great catch diagnostic errors.

Number of patients, %
Safer Dx error score
 ≤1.50 19/162, 12
 1.51–1.89 2/162, 1
 ≥1.90 141/162, 87
Patients with diagnostic error (Safer Dx ≤1.50)
 Yes to question 12 17/19a, 89
 No to question 12 2/19, 11
 Diagnostic ASP intervention 18/19, 95

a: 0/162 patients had both "Yes" to question 12 and Safer Dx error score >1.50.

Of the 55 cases randomly selected to assess IRR, Reviewer #1 scored 10 cases ≤1.50 and Reviewer #2 scored 15 cases ≤1.50. For dichotomized question 12, Reviewer #1 responded “yes the case involved a diagnostic error” for 9 cases, and Reviewer #2 responded “yes” for 16 cases (Table 6). There was 80% agreement between the two reviewers for both Safer Dx scores and dichotomized question 12 (yes/no). The kappa coefficient for IRR was 0.44 (95% CI 0.16–0.71) for the Safer Dx scores and 0.44 (95% CI 0.18–0.71) for the dichotomized question 12 (yes/no), indicating moderate agreement.

Table 6:

Agreement for great catch scoring.

Reviewer scores for overall Safer Dx score (n=55)

                         Reviewer #2
                         ≤1.50    >1.50
Reviewer #1   ≤1.50        7        3
              >1.50        8       37

Reviewer responses for question 12 (n=55)

                         Reviewer #2
                         "Yes"    "No"
Reviewer #1   "Yes"        7        2
              "No"         9       37
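The kappa point estimates themselves follow directly from the counts in Table 6. A minimal sketch (point estimates only; the reported confidence intervals require a separate standard-error calculation):

```python
def cohens_kappa(a, b, c, d):
    """Cohen's kappa for a 2x2 agreement table [[a, b], [c, d]],
    where a and d are the cells on which the two reviewers agree."""
    n = a + b + c + d
    po = (a + d) / n                                      # observed agreement
    pe = ((a + b) * (a + c) + (c + d) * (b + d)) / n**2   # chance agreement
    return (po - pe) / (1 - pe)

print(round(cohens_kappa(7, 3, 8, 37), 2))  # 0.44 (Safer Dx scores)
print(round(cohens_kappa(7, 2, 9, 37), 2))  # 0.44 (question 12)
```

Both tables also give 44/55 agreeing cases, matching the reported 80% raw agreement.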

Discussion

In this study of notable HS-ASP interventions, 12% of GCs involved a diagnostic error as measured by the previously validated Safer Dx Instrument, with moderate agreement between two reviewers (κ=0.44, 80% agreement). Nearly all GCs associated with diagnostic error included a diagnostic recommendation made by the HS-ASP team, suggesting an opportunity to correct a misdiagnosis or identify a missed diagnosis. Our findings indicate that, in addition to preventing a wide range of potential adverse events related to antimicrobial use, the HS-ASP model may also identify and intervene on diagnostic errors.

In the course of routine care, the diagnostic process occurs longitudinally, with information provided in a stepwise fashion. A patient arrives seeking an explanation of their health problem, initial vital signs are recorded, a history and physical exam are performed, and preliminary diagnostic studies are obtained, with results returning at intervals frequently asynchronous with the patient encounter [17]. During this process, a patient's clinical trajectory may evolve over time, and the initial presentation may not offer sufficient distinguishing features to make the correct diagnosis [18]. The cumulative and sequential information gathering inherent to the diagnostic process can influence the final diagnosis through well-described heuristic failures like premature closure, anchoring, or confirmation bias [19], [20], [21]. By design, HS-ASP may interrupt these cognitive miscues by offering a "compressed second look" at objective and subjective data at two checkpoints (24 and 72 h after starting an antimicrobial). Through individual, directed chart reviews at these time points, the HS-ASP steward obtains a "bird's-eye view" of the clinical data for a given patient, including the evolution of the disease over a brief time period, and can offer a unique perspective on the diagnostic process. This perspective is communicated through in-person conversation with frontline providers, allowing for diagnostic error recognition and intervention.

Decreased face-to-face communication between providers contributes to diagnostic errors as providers increasingly practice in diagnostic isolation. As described by Graber et al., “The EHR has become the de facto norm for communication in health care, leading each member of the team to work independently, in their own silo … the face-to-face communication that was once the norm in the course of a diagnostic evaluation has been replaced by opaque orders and formulaic reports, both of which lack the rich detail that was inherent when providers talked with each other” [22]. In-person communication with frontline providers is an important and distinguishing feature of the HS-ASP model [11], [12], [13], [23]. Face-to-face communication by HS-ASP allows for nuanced and reciprocal conversations that may promote recognition and intervention before diagnostic errors harm patients.

Improving teamwork in the diagnostic process is one strategy recommended by the National Academy of Medicine to prevent errors [17]. A “collective intelligence approach” was recently shown to improve diagnostic accuracy compared to individual physicians, and facilitating access to second opinions has been identified as a means to prevent diagnostic error [24], [25], [26]. In an effort to expedite teamwork and improve diagnostic accuracy, several institutions have implemented so-called “diagnostic management teams (DMTs)” consisting of expert diagnosticians (pathologists, radiologists, microbiologists, etc.) that proactively review cases and offer guidance for a tailored diagnostic approach for individual patients [22], [27], [28]. These teams have demonstrated early promise for improving diagnostic accuracy and decreasing errors. Our findings support the claim that HS-ASP may function as a contributor to such an “expanded diagnostic team” [22].

HS-ASP reviews charts only for those patients receiving antimicrobials, and there are likely patients with diagnostic errors neither identified nor reviewed by HS-ASP. However, 50–65% of all children admitted to a pediatric hospital receive antimicrobials, and during this study period, HS-ASP intervened on 19% of all admissions in which antimicrobials were prescribed [13], [29]. This suggests that HS-ASP represents a significant opportunity to mitigate harm from diagnostic errors for many patients across multiple clinical units. In addition to these individual patient-level diagnostic errors, our study also suggests that HS-ASP may play a role in larger hospital-wide diagnostic errors. For example, several of the GCs intervened on epidemiologic concerns that had been unrecognized by providers. One such example involved several concurrent cases of Enterobacter cloacae infection in the CHCO neonatal intensive care unit that were not recognized as an outbreak until the HS-ASP team noted these similar infections in a short time period. HS-ASP has a longitudinal presence on every inpatient unit at CHCO, offering a macroscopic and real-time assessment of active hospital-wide infectious concerns.

There are several limitations to our study. It relies on subjective retrospective review of EHR documentation, which limits the ability to evaluate a provider's reasoning during the diagnostic process; for example, documentation may not reflect that further work-up was planned but contingent on the results of initial testing. Prior studies have identified low interrater reliability when determining the presence of diagnostic error in retrospectively reviewed cases [8], [16], [30], [31], [32]. Our use of a validated scoring system and two blinded reviewers attempted to offset this inherent subjectivity. In addition, since this project ended, a new version of the Safer Dx Instrument has been published, and our findings may have been limited by using the original instrument rather than the revised questionnaire [16]. Lastly, the HS-ASP model was developed at CHCO by providers dedicated to its success; our results for HS-ASP as a model to recognize and prevent diagnostic errors may therefore not generalize to other institutions, even those with a similar HS-ASP format. Future prospective studies are needed to evaluate whether the HS-ASP model can serve a valuable role on the "diagnostic team" and what, if any, unintended consequences arise from HS-ASP diagnostic recommendations.

Despite these limitations, our findings demonstrate the potential for HS-ASP to identify and intervene on diagnostic errors among hospitalized children. As argued by Singh, health care organizations should "choose at least one diagnostic error detection strategy to augment their existing safety and/or risk management programs" [18]. Our findings show that HS-ASP can serve as a model for proactive intervention on diagnostic errors and could be one such strategy for health care organizations seeking to improve care.


Corresponding author: Justin B. Searns, MD, Divisions of Hospital Medicine & Infectious Diseases, Department of Pediatrics, Children’s Hospital Colorado, University of Colorado, 13123 E 16th Ave, B302, Aurora, CO 80045, USA, Phone: +1 720 777 5070, E-mail:

  1. Research funding: None declared.

  2. Author contributions: All authors have accepted responsibility for the entire content of this manuscript and approved its submission.

  3. Competing interests: Authors state no conflict of interest.

  4. Informed consent: Informed consent was waived for this retrospective review.

  5. Ethical approval: This project was approved by the Children’s Hospital Colorado Organizational Research Risk and Quality Improvement Review Panel.

References

1. Okafor, N, Payne, VL, Chathampally, Y, Miller, S, Doshi, P, Singh, H. Using voluntary reports from physicians to learn from diagnostic errors in emergency medicine. Emerg Med J 2016;33:245–52. https://doi.org/10.1136/emermed-2014-204604.

2. Kohn, LT, Corrigan, J, Donaldson, MS. To err is human: building a safer health system. Washington, DC: National Academy Press; 2000.

3. Singh, H, Thomas, EJ, Wilson, L, Kelly, PA, Pietz, K, Elkeeb, D, et al. Errors of diagnosis in pediatric practice: a multisite survey. Pediatrics 2010;126:70–9. https://doi.org/10.1542/peds.2009-3218.

4. Rinke, ML, Singh, H, Ruberman, S, Adelman, J, Choi, SJ, O’Donnell, H, et al. Primary care pediatricians’ interest in diagnostic error reduction. Diagnosis (Berl) 2016;3:65–9. https://doi.org/10.1515/dx-2015-0033.

5. Grubenhoff, JA, Ziniel, SI, Cifra, CL, Singhal, G, McClead, RE, Singh, H. Pediatric clinician comfort discussing diagnostic errors for improving patient safety: a survey. Pediatr Qual Saf 2020;5:e259. https://doi.org/10.1097/pq9.0000000000000259.

6. Newman-Toker, DE, Schaffer, AC, Yu-Moe, CW, Nassery, N, Saber Tehrani, AS, Clemens, GD, et al. Serious misdiagnosis-related harms in malpractice claims: the “big three”-vascular events, infections, and cancers. Diagnosis (Berl) 2019;6:227–40. https://doi.org/10.1515/dx-2019-0019.

7. Al-Mutairi, A, Meyer, AN, Thomas, EJ, Etchegaray, JM, Roy, KM, Davalos, MC, et al. Accuracy of the Safer Dx Instrument to identify diagnostic errors in primary care. J Gen Intern Med 2016;31:602–8. https://doi.org/10.1007/s11606-016-3601-x.

8. Schiff, GD, Hasan, O, Kim, S, Abrams, R, Cosby, K, Lambert, BL, et al. Diagnostic error in medicine: analysis of 583 physician-reported errors. Arch Intern Med 2009;169:1881–7. https://doi.org/10.1001/archinternmed.2009.333.

9. Graber, ML, Franklin, N, Gordon, R. Diagnostic error in internal medicine. Arch Intern Med 2005;165:1493–9. https://doi.org/10.1001/archinte.165.13.1493.

10. Adams, JG, Bohan, JS. System contributions to error. Acad Emerg Med 2000;7:1189–93. https://doi.org/10.1111/j.1553-2712.2000.tb00463.x.

11. Hurst, AL, Child, J, Pearce, K, Palmer, C, Todd, JK, Parker, SK. Handshake stewardship: a highly effective rounding-based antimicrobial optimization service. Pediatr Infect Dis J 2016;35:1104–10. https://doi.org/10.1097/inf.0000000000001245.

12. Messacar, K, Campbell, K, Pearce, K, Pyle, L, Hurst, AL, Child, J, et al. A handshake from antimicrobial stewardship opens doors for infectious disease consultations. Clin Infect Dis 2017;64:1449–52. https://doi.org/10.1093/cid/cix139.

13. MacBrayne, CE, Williams, MC, Levek, C, Child, J, Pearce, K, Birkholz, M, et al. Sustainability of handshake stewardship: extending a hand is effective years later. Clin Infect Dis 2020;70:2325–32. https://doi.org/10.1093/cid/ciz650.

14. Davalos, MC, Samuels, K, Meyer, AN, Thammasitboon, S, Sur, M, Roy, K, et al. Finding diagnostic errors in children admitted to the PICU. Pediatr Crit Care Med 2017;18:265–71. https://doi.org/10.1097/pcc.0000000000001059.

15. Hong, H, Choi, Y, Hahn, S, Park, SK, Park, BJ. Nomogram for sample size calculation on a straightforward basis for the kappa statistic. Ann Epidemiol 2014;24:673–80. https://doi.org/10.1016/j.annepidem.2014.06.097.

16. Singh, H, Khanna, A, Spitzmueller, C, Meyer, AND. Recommendations for using the revised Safer Dx Instrument to help measure and improve diagnostic safety. Diagnosis (Berl) 2019;6:315–23. https://doi.org/10.1515/dx-2019-0012.

17. Ball, JR, Balogh, E. Improving diagnosis in health care: highlights of a report from the National Academies of Sciences, Engineering, and Medicine. Ann Intern Med 2016;164:59–61. https://doi.org/10.7326/m15-2256.

18. Singh, H. Editorial: helping health care organizations to define diagnostic errors as missed opportunities in diagnosis. Joint Comm J Qual Patient Saf 2014;40:99–101. https://doi.org/10.1016/s1553-7250(14)40012-6.

19. Croskerry, P. Adaptive expertise in medical decision making. Med Teach 2018;40:803–8. https://doi.org/10.1080/0142159x.2018.1484898.

20. Croskerry, P. The importance of cognitive errors in diagnosis and strategies to minimize them. Acad Med 2003;78:775–80. https://doi.org/10.1097/00001888-200308000-00003.

21. Croskerry, P, Petrie, DA, Reilly, JB, Tait, G. Deciding about fast and slow decisions. Acad Med 2014;89:197–200. https://doi.org/10.1097/acm.0000000000000121.

22. Graber, ML, Rusz, D, Jones, ML, Farm-Franks, D, Jones, B, Cyr Gluck, J, et al. The new diagnostic team. Diagnosis (Berl) 2017;4:225–38. https://doi.org/10.1515/dx-2017-0022.

23. Hurst, AL, Child, J, Parker, SK. Intervention and acceptance rates support handshake-stewardship strategy. J Pediatr Infect Dis Soc 2019;8:162–5. https://doi.org/10.1093/jpids/piy054.

24. Perrem, LM, Fanshawe, TR, Sharif, F, Pluddemann, A, O’Neill, MB. A national physician survey of diagnostic error in paediatrics. Eur J Pediatr 2016;175:1387–92. https://doi.org/10.1007/s00431-016-2772-0.

25. Graber, ML, Kissam, S, Payne, VL, Meyer, AN, Sorensen, A, Lenfestey, N, et al. Cognitive interventions to reduce diagnostic error: a narrative review. BMJ Qual Saf 2012;21:535–57. https://doi.org/10.1136/bmjqs-2011-000149.

26. Barnett, ML, Boddupalli, D, Nundy, S, Bates, DW. Comparative accuracy of diagnosis by collective intelligence of multiple physicians vs. individual physicians. JAMA Netw Open 2019;2:e190096. https://doi.org/10.1001/jamanetworkopen.2019.0096.

27. Verna, R, Velazquez, AB, Laposata, M. Reducing diagnostic errors worldwide through diagnostic management teams. Ann Lab Med 2019;39:121–4. https://doi.org/10.3343/alm.2019.39.2.121.

28. Laposata, M. Errors in clinical laboratory test selection and result interpretation: commonly unrecognized mistakes as a cause of poor patient outcome. Diagnosis (Berl) 2014;1:85–7. https://doi.org/10.1515/dx-2013-0010.

29. Gerber, JS, Newland, JG, Coffin, SE, Hall, M, Thurm, C, Prasad, PA, et al. Variability in antibiotic use at children’s hospitals. Pediatrics 2010;126:1067–73. https://doi.org/10.1542/peds.2010-1275.

30. Localio, AR, Weaver, SL, Landis, JR, Lawthers, AG, Brenhan, TA, Hebert, L, et al. Identifying adverse events caused by medical care: degree of physician agreement in a retrospective chart review. Ann Intern Med 1996;125:457–64. https://doi.org/10.7326/0003-4819-125-6-199609150-00005.

31. Thomas, EJ, Lipsitz, SR, Studdert, DM, Brennan, TA. The reliability of medical record review for estimating adverse event rates. Ann Intern Med 2002;136:812–6. https://doi.org/10.7326/0003-4819-136-11-200206040-00009.

32. Hayward, RA, Hofer, TP. Estimating hospital deaths due to medical errors: preventability is in the eye of the reviewer. JAMA 2001;286:415–20. https://doi.org/10.1001/jama.286.4.415.

Received: 2020-03-05
Accepted: 2020-09-17
Published Online: 2020-10-28
Published in Print: 2021-08-26

© 2020 Justin B. Searns et al., published by De Gruyter, Berlin/Boston

This work is licensed under the Creative Commons Attribution 4.0 International License.