
Diagnosis

Official Journal of the Society to Improve Diagnosis in Medicine (SIDM)


Clinical criteria to screen for inpatient diagnostic errors: a scoping review

Edna C. Shenvi MD, MAS (corresponding author)
Division of Biomedical Informatics, University of California, San Diego, 9500 Gilman Dr. MC 0728, La Jolla, CA 92093-0728, USA

Robert El-Kareh MD, MS, MPH
Divisions of Biomedical Informatics and Hospital Medicine, Department of Medicine, University of California, San Diego, La Jolla, CA, USA
Published Online: 2014-10-18 | DOI: https://doi.org/10.1515/dx-2014-0047

Abstract

Diagnostic errors are common and costly, but difficult to detect. “Trigger” tools show promise for facilitating detection, but have not been applied specifically to inpatient diagnostic error. We performed a scoping review to collate all individual “trigger” criteria that have been developed or validated and that may indicate an inpatient diagnostic error has occurred. We searched three databases and screened 8568 titles and abstracts to ultimately include 33 articles. We also developed a conceptual framework of diagnostic error outcomes using real clinical scenarios, and used it to categorize the extracted criteria. Of the multiple criteria we found that related to inpatient diagnostic error and were amenable to automated detection, the most common were death, transfer to a higher level of care, arrest or “code”, and prolonged length of hospital stay. Several others, such as abrupt stoppage of multiple medications or change in procedure, may also be useful. Validation for general adverse event detection was performed in 15 studies, but only one performed validation for diagnostic error specifically. Automated detection was used in only two studies. These criteria may be useful for developing diagnostic error detection tools.

This article offers supplementary material which is provided at the end of the article.

Keywords: adverse event detection; diagnostic error detection; trigger tools

Introduction

Diagnostic error can be defined as a wrong, missed, or delayed diagnosis [1] and is a cause of significant, largely preventable healthcare harm [2]. One estimate attributed 40,000–80,000 deaths annually in the US to diagnostic error in the inpatient setting alone [3], and errors of diagnosis are the most common [4] and the most lethal [5] kind of professional liability claim. An estimated one in 20 US adults is affected by a diagnostic error in the outpatient setting [6], and about half of these errors are considered potentially harmful. While patient safety has become an increasingly high priority nationwide, diagnostic error has largely been overshadowed by efforts to reduce other kinds of harm, such as medication errors and nosocomial infections; this may be due in part to the difficulty of measuring and analyzing diagnostic errors accurately.

Voluntary reporting and autopsies are among the multiple approaches used to research diagnostic error [7], but all have significant limitations. Retrospective chart review is often the best option, but it is time-consuming and costly. Such review efforts have been facilitated by two-stage processes, in which a nurse or other non-physician first screens a chart against a list of criteria or “triggers”, such as an inpatient death or transfer to an intensive care unit (ICU); records that screen positive for a criterion are then reviewed by a physician for the presence of an adverse event (AE). This method was first reported in the California Medical Insurance Feasibility Study (MIFS) [8, 9], adapted for the landmark Harvard Medical Practice Study (HMPS) [10–12] and similar studies in other countries [13–17], and influenced development of the “Global Trigger Tool” (GTT) [18], the most commonly employed such tool today.

None of these studies focused specifically on diagnostic errors, but trigger tools have significant potential to improve the study of diagnostic error [7]: they can enrich the yield of charts reviewed, and some can be applied with automated screening. Trigger tools for diagnostic error have been employed successfully in the outpatient setting using criteria such as an unscheduled hospitalization within 2 weeks of a primary care visit [19]. No such studies have specifically evaluated general triggers of diagnostic error in the inpatient setting. Research on diagnostic error has previously focused on high-risk areas such as missed cancer in outpatient settings and on the emergency department, where undifferentiated patients are evaluated under high uncertainty and time pressure. Because numerous studies on adverse events have demonstrated the harm and preventability of diagnostic error in hospitals, we sought to identify potential triggers reported in the research literature that could be used to screen for diagnosis-related errors in hospitalized adult patients, and specifically those amenable to automated detection using data available in an electronic health record (EHR).

Methods

Developing a search strategy

We first compiled a test set of 12 articles by searching references in review articles and looking for related references through PubMed. From these articles we extracted keywords and medical subject headings (MeSH) terms, and evaluated successive iterations of our searches by their success in retrieving the test-set citations. Searches were then adapted to the syntax of Web of Science and CINAHL, with additional queries for articles in PubMed not yet indexed with MeSH terms and for those in CINAHL not yet similarly indexed. Keywords included “adverse events”, “diagnostic errors”, “detect”, and “identify”, and MeSH headings included “Medical Errors”, “Medical Audit”, and “Risk management/methods”. We limited our search to publications in the English language. A full description of the search strategy is available in the Supplementary Material.

Article selection

The final database search of PubMed, Web of Science, and CINAHL, performed in July 2013, retrieved 8861 references. Removing duplicates yielded 8558 unique citations. Both authors screened titles and abstracts and identified 146 articles for full-text review. Disagreements were discussed between the two reviewers to reach consensus. We also searched references cited in both research and review publications; this citation tracking yielded an additional five articles for inclusion, for a total of 33 included studies. Article selection is summarized in Figure 1.

Figure 1: Article selection.

We included only those articles that developed or validated criteria indicating that an error may have occurred. Some studies validated an entire tool, such as the GTT, but not individual criteria, so these were not included. Error-detection studies that referred to other articles for their methods or criteria used for their study were also excluded, although we searched these citations as well. Because our study focused on criteria applied to adult medical inpatients, we also excluded studies that were only applied to outpatients, emergency departments, pediatrics, or other specialties.

Framework of outcomes

Our goal was to categorize the potentially measurable trigger criteria we would find in the literature search. As we did not find an appropriate existing system, we undertook an iterative process to create a framework for this purpose. After preliminary discussion with clinical experts about the manifestations of diagnostic errors, we identified categories of “signals” related to patient status, clinical assessments (e.g., a diagnosis itself), and clinical actions (e.g., starting a treatment or making other management decisions). We used four illustrative clinical cases to inform further development, and validated the framework with an additional four clinical case reports. As we reviewed the potential criteria published in the literature, we refined the categorization iteratively. A depiction of this categorization is in Table 1.

Table 1: Framework of outcomes of inpatient diagnostic error.

Data extraction and analysis

One author [ES] performed most of the full-text review and data extraction, with guidance and revision by the other author [RE]. From each study, we extracted the research objectives and setting, the presence of validation and automation, and whether the authors reported that their methods detected “diagnostic error”, its incidence, and how such error was defined. A summary of the included studies is in Table 2.

Table 2: Characteristics of included studies.

We extracted all record screening criteria from the included articles. Whether each criterion was likely to be a useful trigger for inpatient diagnostic error was determined by comparison with our framework of potential outcomes. A summary of the extracted criteria is in Table 3, grouped by category and ordered as presented in Table 1. An expanded version of all criteria reported is available in the Supplementary Material, Table 2.

Table 3: Signals of potential inpatient diagnostic error.

Results

Criteria associated with diagnostic error

Significant clinical deterioration: death, arrest or code, and resultant clinical management

Errors that result in harm would be those of highest priority to prevent; these may manifest as clinical deterioration due to management inappropriate to the patient’s true condition. The worst such deterioration results in a patient’s death, or near death, such as a cardiorespiratory arrest requiring a “code team” or “rapid response team” to resuscitate the patient. Any inpatient death would therefore be a trigger criterion prompting review for a diagnostic error, or for AEs in general. At least one of these three concepts (death, cardiorespiratory arrest, or medical emergency team response) was present in every study. Some authors modified them to increase their specificity for error, such as limiting deaths to “unexpected death” or “death unrelated to natural course of illness and differing from immediate expected outcome of patient management” [31]. One study [21], which aimed not to measure incidence but to perform qualitative analysis of care preceding AEs, examined only deaths subsequent to a code team call or ICU transfer. These criteria would all function differently from a categorical “death” criterion, although the underlying concept is the same.

Cardiorespiratory arrest was a criterion separate from death in most studies. To increase its specificity for error in the MIFS, reviewers were instructed not to count cardiorespiratory arrest as a positive screen in patients admitted for planned terminal care [8], and Pavão et al. [24] distinguished it from death by counting only “reversed” cardiorespiratory arrest. It was linked with other “serious intervening event(s)”, including deep vein thrombosis, pressure sore, and neurological events, in two studies [32, 40]. Activation of a code team in response to an arrest is a clinical action that would function very similarly as a trigger, even though cardiorespiratory arrest itself refers to the status of the patient. The three studies [22, 31, 37] that listed only an emergency team activation as a criterion (code, medical emergency, or rapid response team) did not have a separate cardiac arrest trigger. The GTT as developed in early studies [18, 27] has “any code or arrest” as one trigger, with “rapid response team activation” added in a later study [22]; unlike the others, however, it does not have death as a unique trigger separate from these clinical events, except for intra- or post-operative death. Variation across hospitals in which clinical scenarios prompt such activations is likely to limit the broad applicability of these criteria.

Transfer to an intensive care unit or other increased level of care was present in 29 of the studies. Several specified that the transfer had to be unexpected for the record to screen positive. Names for units varied, with “intensive”, “semi-intensive”, “acute”, and “special” care units all used, although only one study [36] distinguished intensive and intermediate care transfers as two separate criteria. This criterion was broadened to any “transfer to a higher level of care” in four studies [18, 22, 28, 35]. Three of these also included readmission to the ICU [18, 22, 35], which could reflect deterioration from one missed disease process while another improved, but could also be regarded as overlapping with any increase in care acuity.

Other deterioration (as depicted in Table 3) may result in intubation after admission, new dialysis, or a change from medical management to emergent surgery, i.e., an unexpected procedure or visit to the OR. We also grouped change of code status in this category, such as a full-code patient deteriorating to the point of receiving a “do not resuscitate” order. However, these four criteria may not necessarily be outcomes of deterioration; they may merely reflect a change in management plan, a category discussed below.

Unexpected time course of illness

Each condition has an expected range of potential courses, and some criteria aim to detect those courses that deviate from what was expected. Diagnostic error may manifest after discharge, such as lack of improvement necessitating return to healthcare or readmission within a certain time period. Many studies used readmission within a threshold number of days, or specifically a readmission because of the care provided in the previous admission [33].

Diagnostic error may also prolong the hospital course, with or without deterioration or changes in clinical management. This concept was present in 10 of the studies, although quantified in different ways: threshold values (e.g., length of stay [LOS] >35, 21, or 10 days), comparison with an “expected” duration set at admission, or comparison with the average duration for the diagnosis-related group (DRG). The HMPS [10] used different DRG percentiles based on patient age. Craddick’s Medical Management Analysis (MMA) program [48] was designed to be tailored to each hospital, leaving each institution to set its own threshold as either a LOS value or a percentile. Specific thresholds are less broadly applicable across geographic and care settings: Kobayashi et al. [30] report that the average hospital stay in Japan in 2004 was 22.2 days, much longer than in other countries, so thresholds used elsewhere cannot be applied directly in Japan. The MIFS group [8] attempted to improve the specificity of this criterion by instructing reviewers to exclude prolongations of hospitalization that were only for administrative or social reasons.
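
To make the percentile-based approach concrete, below is a minimal sketch, assuming access to historical LOS values grouped by DRG; the 90th-percentile cutoff and the function and parameter names are our illustrative assumptions, not criteria from any single reviewed study.

```python
# Illustrative LOS trigger: screen positive when a stay exceeds a
# DRG-specific percentile of historical stays, in the spirit of the
# DRG-percentile approach used by the HMPS [10].
from statistics import quantiles

def los_percentile(drg_history: list[int], pct: int = 90) -> float:
    """Return the LOS value at the given percentile for one DRG cohort."""
    cuts = quantiles(drg_history, n=100)  # 99 cut points
    return cuts[pct - 1]

def prolonged_los(encounter_los: int, drg_history: list[int]) -> bool:
    # Fires when the stay is longer than 90% of historical stays
    # for the same diagnosis-related group.
    return encounter_los > los_percentile(drg_history)
```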

Change of management team

Recognition of a diagnostic error could result in a patient transfer to another hospital, a criterion included in 23 studies. Two [8, 24] included exceptions for transfers for exams or procedures unavailable at the first hospital and for transfers mandated for administrative reasons. This criterion may be of questionable utility in urban tertiary care academic medical centers, but useful for quality review in smaller hospitals, where a patient’s failure to improve may prompt a decision to seek assistance from a larger or more specialized institution.

Correction of an error could also result in a patient transfer to another service, such as from general medicine to cardiology, without a concomitant increase in acuity. Resar and colleagues [35] included change of physician in charge in their ICU trigger tool as a criterion for detecting potential error. While physician changes happen for multiple reasons besides diagnostic error, a tool that detects change of management team may be a useful screen.

Change of specific management plan

Responding to or correcting a diagnostic error could result in a change in the specific treatment plan for a patient. “Abrupt medication stop” was a criterion in five studies; one [23] specified this for an enterprise data warehouse query as “discontinuation of 4 or more medications in a 6 h period >48 h after admission and at least 24 h prior to discharge”.
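
A minimal sketch of that quoted criterion follows, assuming a per-encounter list of medication discontinuation timestamps is available; the function and parameter names are ours, not the cited study’s warehouse schema.

```python
# Sketch of the "abrupt medication stop" trigger [23]: 4 or more
# discontinuations within any 6 h window, occurring >48 h after admission
# and at least 24 h before discharge.
from datetime import datetime, timedelta

def abrupt_med_stop(admit: datetime, discharge: datetime,
                    stop_times: list[datetime]) -> bool:
    window = timedelta(hours=6)
    # Keep only stops inside the eligible portion of the stay.
    eligible = sorted(t for t in stop_times
                      if t > admit + timedelta(hours=48)
                      and t <= discharge - timedelta(hours=24))
    # Slide over the eligible stops: any 6-hour span holding >= 4 fires.
    for i, start in enumerate(eligible):
        if sum(1 for t in eligible[i:] if t - start <= window) >= 4:
            return True
    return False
```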

Changes in plan regarding a surgery or other procedure could manifest in three different ways, and all were present in at least one study. A patient with a wrong diagnosis could be booked for surgery and then changed to medical management (i.e., cancellation of a planned procedure), or have a pre-operative diagnosis and planned procedure that differs significantly from a post-operative diagnosis and actual procedure performed (i.e., change in procedure). Alternatively, an unplanned visit to the operating room or other procedural facility, as discussed under deterioration, could also occur after a patient receives medical therapy only to eventually undergo operative management.

Change in diagnosis itself

Recognition of an incorrect diagnosis could reasonably result in a modification of the patient’s primary or secondary problems in the medical record. The only diagnosis-related criterion we encountered was a pathology result either normal or unrelated to the previous diagnosis [18, 27], which we grouped with the procedure-related criteria, since a procedure would be the mode of acquiring the specimen and the comparison would be with a pre-procedure diagnosis. Other criteria related to diagnoses, as listed in the “Indicators of clinical assessment” column of our framework, were not present in these studies. A substantial change in the problem list might be a useful screen for diagnostic error, although its performance and potential for automation would vary with EHR design and with how well the problem list is maintained.

Diagnostic uncertainty

Identification of cases in which the correct diagnosis was unclear to the medical team would be useful for patient safety review and for education. Direct measurement of diagnostic uncertainty would be difficult; however, there may be ways to detect its indirect manifestations. We found only one published criterion in this category: Resar’s ICU trigger tool [35] used multiple consultations, with a threshold of three or more. Requesting input from multiple specialties could be a manifestation of either diagnostic uncertainty or delay.
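
A sketch of this consultation criterion is below, assuming consult orders can be pulled from EHR order data; counting distinct consulted services is our simplification of the published threshold.

```python
# Minimal version of the multiple-consultation trigger from Resar's ICU
# tool [35]: three or more consulted services screens positive.
def multiple_consults(consult_services: list[str], threshold: int = 3) -> bool:
    return len(set(consult_services)) >= threshold
```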

Other criteria reported likely to be associated with diagnostic error

Other criteria we encountered may be associated with diagnostic error but are unlikely to be amenable to automated detection or available in an EHR. Two studies [31, 32] used delay or error in diagnosis as a screening criterion itself, which would require significant clinician interpretation to apply. Others, such as patient and provider dissatisfaction, litigation, or ethics board referrals, are unlikely to appear in an EHR.

Criteria for non-diagnostic error

The criteria we regarded as not associated with diagnostic error were those developed to detect other types of adverse events, such as criteria specific to nosocomial infections (positive blood or urine cultures), adverse drug events (administration of “rescue” medications such as naloxone or vitamin K), and procedural complications (return to the operating room or injury of an organ during a procedure). Additional triggers covered in-hospital falls, venous thromboembolic events, strokes, pressure ulcers, and bleeding. Since the majority of these studies employed manual screening, many also had a vague “catch-all” category that is not translatable to an electronic trigger; two such criteria are “any other undesirable outcome not covered above” [10] and “other finding on chart review suggestive of an adverse event” [44]. A full list of the criteria we considered not primarily associated with electronically detectable diagnostic error, with example studies in which each was used, is available in the Supplementary Material, Table 2.

Diagnostic error, validation and automation

Thirteen of the studies reported the proportion of adverse events involving diagnostic error, while the other studies had varied research objectives, mostly determining the severity and preventability of adverse events rather than other categorizations. Among the studies that reported results for diagnostic errors, the proportion of AEs involving diagnostic error ranged from 5.1% in one study [33] to 67.5% in another [37] (discussed below). These proportions are difficult to compare given variation in the definitions of the study populations (denominators).

Fifteen studies reported validation metrics for their criteria, as positive predictive values or odds ratios for individual criteria, or kappa coefficients for inter-rater agreement during manual review. However, these are validation metrics for AEs in general and are not specific to diagnosis. Only one study [37] provided statistics from which validation in detecting diagnostic error can be inferred. This study used the single criterion of medical emergency team referral and found that 31.3% of referrals were associated with medical errors, of which 67.5% were determined to be diagnostic, although it used a broader definition of diagnostic error that includes failure to perform a test or act on known results (as in Table 2). Given the variability of definitions of error and care practices, validation metrics cannot be aggregated across the studies at this time.

Only two studies used automated methods: one compared automated with traditional manual triggers [23], and another electronically screened discharge summary text [41]. Neither reported the proportion of detected errors that were diagnostic in nature. Bates et al. [46] reported that many AEs could be detected by computer systems of even low sophistication, yet in the two decades since that publication, automated AE detection encompassing diagnostic error has rarely been performed or reported in the research literature.

Discussion

Criteria in outcomes framework and potential for automated detection

Five of the criteria used in much of the original work in AE detection can be associated with inpatient diagnostic error: death, cardiorespiratory arrest (or code), transfer to a higher level of care, prolonged length of stay, and transfer to another hospital. Subsequent trigger tools added other indicators: multiple consultations, change of physician in charge, change in procedure, abrupt medication stop, and discordance between diagnosis and pathology result. We also included change of code status, intubation, and new dialysis as criteria with a reasonable association with diagnostic error.
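
As a concrete illustration of how these criteria might drive a first-stage automated screen, the following minimal sketch (our own, not a tool from the reviewed studies) flags encounters against the five classic criteria; the Encounter fields and the LOS cutoff are hypothetical stand-ins for EHR-derived data.

```python
# Hedged sketch of a first-stage automated screen over the five classic
# criteria named above; field names and the LOS cutoff are hypothetical.
from dataclasses import dataclass

@dataclass
class Encounter:
    died: bool = False
    code_called: bool = False      # cardiorespiratory arrest or code/RRT call
    icu_transfer: bool = False     # transfer to a higher level of care
    transferred_out: bool = False  # transfer to another hospital
    los_days: int = 0
    los_threshold: int = 21        # site-specific cutoff, as in the MMA [48]

def trigger_flags(e: Encounter) -> list[str]:
    """Return the criteria that screen positive; a non-empty list would
    route the chart to second-stage physician review."""
    checks = {
        "death": e.died,
        "arrest_or_code": e.code_called,
        "higher_level_of_care": e.icu_transfer,
        "prolonged_los": e.los_days > e.los_threshold,
        "interhospital_transfer": e.transferred_out,
    }
    return [name for name, fired in checks.items() if fired]
```

For example, trigger_flags(Encounter(icu_transfer=True, los_days=30)) returns ["higher_level_of_care", "prolonged_los"], marking that chart for second-stage review.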

In current EHR systems, it may be possible to use multiple changes in active diagnoses as triggers for detecting diagnostic error, as we depict in our model. A change of organ system for a primary diagnosis, such as pancreatitis to myocardial infarction, could be a reasonable trigger, as could the addition of or significant changes to secondary diagnoses. We also considered that some indicators of diagnostic uncertainty may be useful, such as a symptom- or finding-based primary diagnosis rather than a true diagnosis, or multiple diagnostic procedures at a given point in hospitalization (analogous to multiple consults). However, we did not find these used in either manual or automated methods.
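
As a sketch of the organ-system idea, assuming ICD-10-CM coded primary diagnoses, the first letter of the code can serve as a rough proxy for the ICD-10 chapter; this mapping is a simplification for illustration, not a validated criterion.

```python
# Hypothetical trigger: the primary diagnosis crosses ICD-10 chapters
# between admission and discharge (e.g., K85 pancreatitis -> I21 MI).
def icd10_chapter(code: str) -> str:
    return code[0].upper()  # rough proxy; some chapters span several letters

def primary_dx_system_change(admit_dx: str, discharge_dx: str) -> bool:
    return icd10_chapter(admit_dx) != icd10_chapter(discharge_dx)
```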

None of these criteria is specific to diagnostic error, and many may be difficult to translate into an automated query. A delay in diagnosis or in initiation of effective treatment, a trigger in three studies [31, 32, 40], would need to be defined more explicitly for use in automated detection. One way to concretize the concept of treatment delay would be a threshold hospital day for first surgery, analogous to the thresholds used for long length of stay: for example, first major procedure on or after hospital day 3. We encountered this concept in the Complications Screening Program [49], although there it was used only for risk stratification rather than as a trigger, and it was therefore not included in our table. A 2-day delay from admission to surgery might nonetheless be a manifestation of delayed diagnosis, and may warrant further evaluation as a potential screen for diagnostic error.
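
A sketch of that delay criterion, with the day numbering and the “major procedure” filter as our assumptions, might look like this:

```python
# Hypothetical treatment-delay trigger: first major procedure on or after
# hospital day 3 (counting the admission date as day 1).
from datetime import date
from typing import Optional

def delayed_first_procedure(admit: date,
                            first_major_procedure: Optional[date],
                            threshold_day: int = 3) -> bool:
    if first_major_procedure is None:
        return False  # no procedure performed; criterion does not apply
    hospital_day = (first_major_procedure - admit).days + 1
    return hospital_day >= threshold_day
```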

Trigger tools can be used both for retrospective detection of errors and for real-time activation of interventions to prevent imminent or potential harm. Some criteria are by nature exclusively retrospective, such as a patient death, but others, such as multiple consultations or changes in multiple medications, may indicate uncertainty or error in real time and could be used to prompt additional attention to a particular case. The studies we included focused on retrospective error detection, but some of their criteria may have potential for real-time prevention of harm as well. We anticipate that new trigger criteria for both purposes will be developed, and we hope that the conceptual framework we built around the manifestations of diagnostic error will provide a structure within which new criteria and a formalized understanding of this area can be organized.

Identifying the triggers most useful for detecting diagnostic error will require much further study. Positive predictive values, where reported as validation metrics, were for general error detection and not specific to diagnostic error. Eleven of these studies reported the proportion of errors that were diagnostic, but definitions of what constituted this type of error varied. Both Mills [8] and Soop [29] further subcategorized wrong and delayed diagnoses, while other studies grouped the two together; moreover, a few of the definitions may be broader than generally used. The inclusion of failure to order tests or misinterpretation of results in two studies [37, 38], and the grouping of diagnostic with treatment delay in another [36], demonstrate the variability in defining diagnostic error but also illustrate some of the multiple ways it can manifest. For trigger criteria to become more useful, a consensus definition of error will be necessary; such a consensus would allow the performance of different criteria to be compared across multiple settings.

Limitations

Our objective was to compile trigger criteria reported in the literature that may be useful in developing screening tools to detect inpatient diagnostic error. Because this concept is difficult to query clearly and exhaustively in research databases, even our iterative approach to finalizing a search strategy yielded relatively few articles for the number of citations retrieved. While two reviewers screened citations, only one primarily performed the extraction. We also reviewed only English-language publications and only research on adult medical patients. These limitations may have caused us to miss some screening criteria, although many of the included studies repeated similar criteria and cited similar sources, leading us to conclude that our citation tracking was sufficient and that a more extensive literature search would have been unlikely to yield many new criteria. Additionally, our framework of outcomes was based on a small corpus of case reports; further validation with more cases is warranted.

Determining the association of specific criteria with diagnostic error was also not straightforward. While it could be argued that any AE may involve some component of diagnosis or assessment, we focused on criteria whose primary measured item was potentially directly related to a diagnostic assessment. Both our conceptual framework and our compiled criteria reflect a global, convergent picture of diagnostic error rather than a comprehensive tool for detecting every diagnosis that could be missed or wrong.

Conclusions

Inpatient diagnostic errors may be detected and studied more easily using available “triggers” to facilitate chart review. We have identified several such criteria that could be used to develop automated screening tools for detecting diagnostic errors with data available within electronic health records. In addition, we developed a preliminary conceptual framework of outcomes to formalize the manifestations of inpatient diagnostic error, which we hope will prove helpful and be expanded with further study in this area. We also identified some additional criteria that may be useful but have not been used in either manual or automated methods to date. Validation of these criteria is needed to identify those that will provide the most effective screen for inpatient diagnostic error.

Acknowledgments

We would like to acknowledge the guidance and assistance in the development of our search strategy by Mary Wickline, Nancy Stimson, and Penny Coppernoll-Blach, who are all librarians at the University of California, San Diego Library.

References

1. Graber M. Diagnostic errors in medicine: a case of neglect. Jt Comm J Qual Patient Saf 2005;31:106–13.
2. Zwaan L, de Bruijne M, Wagner C, Thijs A, Smits M, van der Wal G, et al. Patient record review of the incidence, consequences, and causes of diagnostic adverse events. Arch Intern Med 2010;170:1015–21.
3. Leape L, Berwick D, Bates D. Counting deaths due to medical errors – Reply. J Am Med Assoc 2002;288:2404–5.
4. Mangalmurti SS, Harold JG, Parikh PD, Flannery FT, Oetgen WJ. Characteristics of medical professional liability claims against internists. J Am Med Assoc Intern Med 2014;174:993–5.
5. Saber Tehrani AS, Lee H, Mathews SC, Shore A, Makary MA, Pronovost PJ, et al. 25-Year summary of US malpractice claims for diagnostic errors 1986–2010: an analysis from the National Practitioner Data Bank. Br Med J Qual Saf 2013;22:672–80.
6. Singh H, Meyer AN, Thomas EJ. The frequency of diagnostic errors in outpatient care: estimations from three large observational studies involving US adult populations. Br Med J Qual Saf 2014;23:727–31.
7. Graber ML. The incidence of diagnostic error in medicine. Br Med J Qual Saf 2013;22(Suppl 2):ii21–ii27.
8. Mills DH, editor. Report on the Medical Insurance Feasibility Study. Sponsored jointly by California Medical Association and California Hospital Association. San Francisco, CA: Sutter Publications, Inc.; 1977.
9. Mills DH. Medical insurance feasibility study. A technical summary. West J Med 1978;128:360–5.
10. Hiatt HH, Barnes BA, Brennan TA, Laird NM, Lawthers AG, Leape LL, et al. A study of medical injury and medical malpractice. N Engl J Med 1989;321:480–4.
11. Brennan TA, Leape LL, Laird NM, Hebert L, Localio AR, Lawthers AG, et al. Incidence of adverse events and negligence in hospitalized patients. Results of the Harvard Medical Practice Study I. N Engl J Med 1991;324:370–6.
12. Leape LL, Brennan TA, Laird N, Lawthers AG, Localio AR, Barnes BA, et al. The nature of adverse events in hospitalized patients. Results of the Harvard Medical Practice Study II. N Engl J Med 1991;324:377–84.
13. Wilson RM, Runciman WB, Gibberd RW, Harrison BT, Newby L, Hamilton JD. The Quality in Australian Health Care Study. Med J Aust 1995;163:458–71.
14. Vincent C, Neale G, Woloshynowych M. Adverse events in British hospitals: preliminary retrospective record review. Br Med J 2001;322:517–9.
15. Davis P, Lay-Yee R, Briant R, Ali W, Scott A, Schug S. Adverse events in New Zealand public hospitals I: occurrence and impact. N Z Med J 2002;115:U271.
16. Baker GR, Norton PG, Flintoft V, Blais R, Brown A, Cox J, et al. The Canadian Adverse Events Study: the incidence of adverse events among hospital patients in Canada. Can Med Assoc J 2004;170:1678–86.
17. Zegers M, de Bruijne MC, Wagner C, Hoonhout LH, Waaijman R, Smits M, et al. Adverse events and potentially preventable deaths in Dutch hospitals: results of a retrospective patient record review study. Qual Saf Health Care 2009;18:297–302.
18. Classen DC, Lloyd RC, Provost L, Griffin FA, Resar R. Development and evaluation of the Institute for Healthcare Improvement Global Trigger Tool. J Patient Saf 2008;4:169–77.
19. Singh H, Giardina TD, Forjuoh SN, Reis MD, Kosmach S, Khan MM, et al. Electronic health record-based surveillance of diagnostic errors in primary care. Br Med J Qual Saf 2012;21:93–100.
20. Cihangir S, Borghans I, Hekkert K, Muller H, Westert G, Kool RB. A pilot study on record reviewing with a priori patient selection. BMJ Open 2013;3:e003034.
21. De Meester K, Van Bogaert P, Clarke SP, Bossaert L. In-hospital mortality after serious adverse events on medical and surgical nursing units: a mixed methods study. J Clin Nurs 2013;22:2308–17.
22. Hwang JI, Chin HJ, Chang YS. Characteristics associated with the occurrence of adverse events: a retrospective medical record review using the Global Trigger Tool in a fully digitalized tertiary teaching hospital in Korea. J Eval Clin Pract 2014;20:27–35.
23. O’Leary KJ, Devisetty VK, Patel AR, Malkenson D, Sama P, Thompson WK, et al. Comparison of traditional trigger tool to data warehouse based screening for identifying hospital adverse events. Br Med J Qual Saf 2013;22:130–8.
24. Pavão AL, Camacho LA, Martins M, Mendes W, Travassos C. Reliability and accuracy of the screening for adverse events in Brazilian hospitals. Int J Qual Health Care 2012;24:532–7.
25. Wilson RM, Michel P, Olsen S, Gibberd RW, Vincent C, El-Assady R, et al. Patient safety in developing countries: retrospective estimation of scale and nature of harm to patients in hospital. Br Med J 2012;344:e832.
26. Letaief M, El Mhamdi S, El-Asady R, Siddiqi S, Abdullatif A. Adverse events in a Tunisian hospital: results of a retrospective cohort study. Int J Qual Health Care 2010;22:380–5.
27. Naessens JM, O’Byrne TJ, Johnson MG, Vansuch MB, McGlone CM, Huddleston JM. Measuring hospital adverse events: assessing inter-rater reliability and trigger performance of the Global Trigger Tool. Int J Qual Health Care 2010;22:266–74.
28. Cappuccio FP, Bakewell A, Taggart FM, Ward G, Ji C, Sullivan JP, et al. Implementing a 48 h EWTD-compliant rota for junior doctors in the UK does not compromise patients’ safety: assessor-blind pilot comparison. Q J Med 2009;102:271–82.
29. Soop M, Fryksmark U, Köster M, Haglund B. The incidence of adverse events in Swedish hospitals: a retrospective medical record review study. Int J Qual Health Care 2009;21:285–91.
30. Kobayashi M, Ikeda S, Kitazawa N, Sakai H. Validity of retrospective review of medical records as a means of identifying adverse events: comparison between medical records and accident reports. J Eval Clin Pract 2008;14:126–30.
31. Mitchell IA, Antoniou B, Gosper JL, Mollett J, Hurwitz MD, Bessell TL. A robust clinical review process: the catalyst for clinical governance in an Australian tertiary hospital. Med J Aust 2008;189:451–5.
32. Williams DJ, Olsen S, Crichton W, Witte K, Flin R, Ingram J, et al. Detection of adverse events in a Scottish hospital using a consensus-based methodology. Scott Med J 2008;53:26–30.
33. Sari AB, Sheldon TA, Cracknell A, Turnbull A, Dobson Y, Grant C, et al. Extent, nature and consequences of adverse events: results of a retrospective casenote review in a large NHS hospital. Qual Saf Health Care 2007;16:434–9.
34. Zegers M, de Bruijne MC, Wagner C, Groenewegen PP, Waaijman R, van der Wal G. Design of a retrospective patient record study on the occurrence of adverse events among patients in Dutch hospitals. BMC Health Serv Res 2007;7:27.
35. Resar RK, Rozich JD, Simmonds T, Haraden CR. A trigger tool to identify adverse events in the intensive care unit. Jt Comm J Qual Patient Saf 2006;32:585–90.
36. Herrera-Kiengelher L, Chi-Lem G, Báez-Saldaña R, Torre-Bouscoulet L, Regalado-Pineda J, López-Cervantes M, et al. Frequency and correlates of adverse events in a respiratory diseases hospital in Mexico City. Chest 2005;128:3900–5.
37. Braithwaite R, DeVita M, Mahidhara R, Simmons R, Stuart S, Foraida M. Use of medical emergency team (MET) responses to detect medical errors. Qual Saf Health Care 2004;13:255–9.
38. Forster AJ, Asmis TR, Clark HD, Al Saied G, Code CC, Caughey SC, et al. Ottawa Hospital Patient Safety Study: incidence and timing of adverse events in patients admitted to a Canadian teaching hospital. CMAJ 2004;170:1235–40.
39. Michel P, Quenon JL, de Sarasqueta AM, Scemama O. Comparison of three methods for estimating rates of adverse events and rates of preventable adverse events in acute care hospitals. Br Med J 2004;328:199.
40. Chapman EJ, Hewish M, Logan S, Lee N, Mitchell P, Neale G. Detection of critical incidents in hospital practice: a preliminary feasibility study. Clinical Governance Bulletin 2003;4:8–9.
41. Murff H, Forster A, Peterson J, Fiskio J, Heiman H, Bates D. Electronically screening discharge summaries for adverse medical events. J Am Med Inform Assoc 2003;10:339–50.
42. Wolff AM, Bourke J, Campbell IA, Leembruggen DW. Detecting and reducing hospital adverse events: outcomes of the Wimmera clinical risk management program. Med J Aust 2001;174:621–5.
43. Thomas E, Studdert D, Burstin H, Orav E, Zeena T, Williams E, et al. Incidence and types of adverse events and negligent care in Utah and Colorado. Med Care 2000;38:261–71.
44. Bates DW, O’Neil AC, Petersen LA, Lee TH, Brennan TA. Evaluation of screening criteria for adverse events in medical patients. Med Care 1995;33:452–62.
45. Wolff AM. Limited adverse occurrence screening: an effective and efficient method of medical quality control. J Qual Clin Pract 1995;15:221–33.
46. Bates DW, O’Neil AC, Boyle D, Teich J, Chertow GM, Komaroff AL, et al. Potential identifiability and preventability of adverse events using information systems. J Am Med Inform Assoc 1994;1:404–11.
47. O’Neil AC, Petersen LA, Cook EF, Bates DW, Lee TH, Brennan TA. Physician reporting compared with medical-record review to identify adverse medical events. Ann Intern Med 1993;119:370–6.
48. Craddick JW, Bader B. Medical management analysis: a systematic approach to quality assurance and risk management. Auburn, CA: Joyce W. Craddick; 1983.
49. Iezzoni LI, Daley J, Heeren T, Foley SM, Fisher ES, Duncan C, et al. Identifying complications of care using administrative data. Med Care 1994;32:700–15.

Supplemental Material

The online version of this article (DOI: 10.1515/dx-2014-0047) offers supplementary material, available to authorized users.

About the article

Corresponding author: Edna C. Shenvi, MD, MAS, Division of Biomedical Informatics, University of California, San Diego, 9500 Gilman Dr. MC 0728, La Jolla, CA 92093-0728, USA, Phone: 858-822-4931, E-mail:


Received: 2014-07-22

Accepted: 2014-09-12

Published Online: 2014-10-18

Published in Print: 2015-02-01


Author contributions: Conception and design of study: REK. Analysis and interpretation of data: ECS, REK. Drafting of the paper: ECS. Critical revision of paper for important intellectual content and final approval of the paper: REK, ECS. All the authors have accepted responsibility for the entire content of this submitted manuscript and approved submission.

Financial support: Dr. Shenvi is supported by the National Library of Medicine training grant T15LM011271, San Diego Biomedical Informatics Education and Research. Dr. El-Kareh is supported by K22LM011435-02, a career development award from the National Library of Medicine.

Employment or leadership: None declared.

Honorarium: None declared.

Competing interests: The funding organization(s) played no role in the study design; in the collection, analysis, and interpretation of data; in the writing of the report; or in the decision to submit the report for publication.


Citation Information: Diagnosis, Volume 2, Issue 1, Pages 3–19, ISSN (Online) 2194-802X, ISSN (Print) 2194-8011, DOI: https://doi.org/10.1515/dx-2014-0047.


©2014, Edna C. Shenvi and Robert El-Kareh, published by De Gruyter. This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 3.0 License. BY-NC-ND 3.0
