Publicly Available | Published by De Gruyter, July 24, 2020

Operational measurement of diagnostic safety: state of the science

Hardeep Singh, Andrea Bradford, and Christine Goeschel
From the journal Diagnosis


Reducing the incidence of diagnostic errors is increasingly a priority for government, professional, and philanthropic organizations. Several obstacles to measurement of diagnostic safety have hampered progress toward this goal. Although a coordinated national strategy to measure diagnostic safety remains an aspirational goal, recent research has yielded practical guidance for healthcare organizations to start using measurement to enhance diagnostic safety. This paper, concurrently published as an Issue Brief by the Agency for Healthcare Research and Quality, issues a “call to action” for healthcare organizations to begin measurement efforts using data sources currently available to them. Our aims are to outline the state of the science and provide practical recommendations for organizations to start identifying and learning from diagnostic errors. Whether by strategically leveraging current resources or building additional capacity for data gathering, nearly all organizations can begin their journeys to measure and reduce preventable diagnostic harm.


Diagnostic errors pose a significant threat to patient safety, resulting in substantial preventable morbidity and mortality and excess healthcare costs [1]. Diagnostic errors are also among the most frequent reasons for medical malpractice claims [2]. In 2015, the National Academies of Sciences, Engineering, and Medicine (NASEM) highlighted the scope and significance of diagnostic safety in its report Improving Diagnosis in Health Care [3]. Among NASEM’s recommendations is a call for accrediting bodies to require healthcare organizations (HCOs) to “monitor the diagnostic process and identify, learn from, and reduce diagnostic errors and near misses in a timely fashion” [3].

Measurement of diagnostic performance is necessary for any systematic effort to improve diagnostic quality and safety. While numerous healthcare performance measures exist (the National Quality Forum [NQF] currently endorses hundreds of measures) [4], [5], none is being used routinely to assess and address diagnostic errors. Only a few U.S. healthcare organizations have explored measurement of diagnostic safety, and the development of diagnostic safety measures remains in its infancy [6]. However, diagnostic errors are increasingly prominent in the national conversation on patient safety.

Several stakeholders have recently launched initiatives and research projects to advance development and implementation of diagnostic safety measurement [7], [8], [9], [10], [11]. These stakeholders include the Agency for Healthcare Research and Quality, the NQF, the Centers for Medicare & Medicaid Services, the Society to Improve Diagnosis in Medicine, and philanthropic foundations such as the Gordon and Betty Moore Foundation. It is thus reasonable to expect that HCOs will face increasing expectations to measure and improve diagnostic safety as part of their quality and safety programs.

In parallel with calls for improving diagnostic safety is a growing emphasis on the concept of a learning health system (LHS). LHSs can be conceptualized at various levels, including the care team level, the HCO level, the external level (e.g., State or national), or within a specific improvement initiative or collaborative. In LHSs, leaders are committed to improvement, outcomes are systematically gathered, variation in care within the system is systematically analyzed to inform and improve care, and a continuous feedback cycle facilitates quality improvement based on evidence [12].

Rigorous measurement of quality and safety should be an essential component of an LHS [13], [14]. Further, measurement is only valuable to the extent that it is actionable and leads to improved care delivery and outcomes [15]. New and emerging measures of diagnostic safety should therefore be evaluated not only in terms of their validity but also their potential to inform pragmatic strategies to improve diagnosis [16]. At present, most of the tools and strategies that organizations use to detect patient safety concerns are not designed to detect diagnostic error specifically, and even when these errors are uncovered, analysis and learning are limited [17].

Although a coordinated national strategy to measure diagnostic safety remains an aspirational goal, recent research has yielded practical guidance for HCOs to start using measurement to enhance diagnostic safety. Measurement strategies validated in research settings are approaching readiness for implementation in an operational context. Equipped with this knowledge, HCOs have an opportunity to develop and implement strategies to learn from diagnostic safety events within their own walls.

In this Issue Brief, we discuss the state of the science of operational measurement of diagnostic safety, informed by recent peer-reviewed scientific publications, innovations in real-world healthcare settings, and initiatives to spur further development of diagnostic safety measures. Our aim is to provide knowledge and recommendations to encourage HCOs to begin to identify and learn from diagnostic errors.

Special considerations for measurement of diagnostic safety

Defining diagnostic errors and diagnostic performance

Measurement begins with a definition. The recent NASEM report defines diagnostic error as the “failure to establish an accurate and timely explanation of the patient’s health problem(s) or communicate that explanation to the patient” [3]. This definition provides three key concepts that need to be operationalized, namely:

  1. Accurately identifying the explanation (or diagnosis) of the patient’s problem,

  2. Providing this explanation in a timely manner, and

  3. Effectively communicating the explanation.

Whereas these criteria are clear-cut in some cases (e.g., a patient diagnosed with bronchitis who is having a pulmonary embolus), they are more ambiguous in others, lacking consensus even among experienced clinicians [18], [19]. Uncertainty in the diagnostic process is ubiquitous and accuracy is often poorly defined, with no widely accepted standards for how long a diagnosis should take for most conditions. Furthermore, patient presentations often evolve over time, and sometimes the best approach is to defer diagnosis or testing to a later time, or to not make a definitive diagnosis until more information is available or if symptoms persist or evolve [20].

Diagnostic performance can be defined not only by accuracy and timeliness but also by efficiency (e.g., minimizing resource expenditure and limiting the patient’s exposure to risk) [21]. Measurement of diagnostic performance should hence consider the broader context of value-based care, including quality, risks, and costs, rather than focus simply on achieving the correct diagnosis in the shortest time [16], [22].

Understanding the multifactorial context of diagnostic safety

Diagnostic errors are best conceptualized as events that may be distributed over time and place and shaped by multiple contextual factors and complex dynamics involving system-related, patient-related, team-related, and individual cognitive factors. Availability of clinical data that provide a longitudinal picture across care settings is essential to understanding a patient’s diagnostic journey.

A multidisciplinary framework, the Safer Dx framework [14] (Figure 1), has been proposed to help advance the measurement of diagnostic errors. The framework follows the Donabedian Structure–Process–Outcome model [23], which approaches quality improvement in three domains:

  1. Structure (characteristics of care providers, their tools and resources, and the physical/organizational setting);

  2. Process (both interpersonal and technical aspects of activities that constitute healthcare); and

  3. Outcome (change in the patient’s health status or behavior).

Figure 1: Safer Dx framework.

The recent NASEM report adapted concepts from the Safer Dx framework and other sources to generate a similar framework that emphasizes system-level learning and improvement [3].

Measurement must account for all aspects of the diagnostic process as well as the distributed nature of diagnosis, i.e., evolving over time and not limited to what happens during a single clinician-patient visit. The five interactive process dimensions of diagnosis include:

  1. Patient-clinician encounter (history, physical examination, ordering of tests/referrals based on assessment);

  2. Performance and interpretation of diagnostic tests;

  3. Followup and tracking of diagnostic information over time;

  4. Subspecialty and referral-specific factors; and

  5. Patient-related factors [24].

Diagnostic performance is the outcome of these processes within a complex, adaptive sociotechnical system [25], [26]. Safe diagnosis (as opposed to missed, delayed, or wrong) is an intermediate outcome compared with more distal patient and healthcare delivery outcomes.

The Safer Dx framework is interactive and acts as a continuous feedback loop. It includes the complex adaptive sociotechnical work system in which diagnosis takes place (structure), the process dimensions in which diagnoses evolve beyond the doctor’s visit (process), and the resulting outcomes of “safe diagnosis” (i.e., correct and timely), as well as patient and healthcare outcomes (outcomes). Valid and reliable measurement of diagnostic errors results in collective mindfulness, organizational learning, improved collaboration, and better measurement tools and definitions. In turn, these proximal outcomes enable overall safer diagnosis, which contributes to both improved patient outcomes and value of healthcare. The knowledge created by measurement will also lead to changes in policy and practice to reduce diagnostic errors as well as feedback for improvement.

Understanding these special considerations for measurement is essential to avoid confusion with other care processes (e.g., screening, prevention, management, or treatment). It also keeps diagnostic safety as a concept distinct from other common concerns such as communication breakdowns, readmissions, and care transitions, which may be either contributors or outcomes related to diagnostic errors. The Safer Dx framework underscores that diagnostic errors can emerge across multiple episodes of care and that corresponding clinical data across the care continuum are essential to inform measurement. However, in reality these data are not always available.

Providers and HCOs, often unaware of their patients’ ultimate diagnosis-related outcomes, could benefit from measurement of diagnostic performance, which can then contribute to better system-level understanding of the problem and enable improvement through feedback and learning [27], [28], [29]. Developing and implementing measurement methods that focus on what matters is consistent with LHS approaches [30]. In the absence of robust and accurate measures for diagnostic safety or any current external accountability metric, HCOs should focus on implementing measurement strategies that can lead to actionable data for improvement efforts [31].

Choosing data sources for measurement

Measurement must account for real-world clinical practice, taking into consideration not only the individual clinician’s decision making and reasoning but also the influence of systems, team members, and patients on the diagnostic process. As a first step, safety professionals need access to both reliable and valid data sources, ideally across the longitudinal continuum of patient care, as well as pragmatic tools to help measure and address diagnostic error.

Many methods have been suggested to study diagnostic errors [32]. While most of them have been evaluated in research settings, few HCOs take a systematic approach to measure or monitor diagnostic error in routine clinical care [33]. However, with appropriate tools and guidance, all HCOs should be able to adopt systematic approaches to measure and learn about diagnostic safety.

Not all methods to measure diagnostic performance are feasible to implement given limited time and resources. For instance, direct video observations of encounters between clinicians and patients would be excellent, but these are expensive and difficult to sustain on the scale needed for systemwide change [34]. In contrast, a more viable alternative for most HCOs is to leverage existing data sources, such as large electronic health record (EHR) data repositories.

EHRs can be useful for measuring both discrete (and perhaps relatively uncommon) events and risks that are systematic, recurrent, or outside clinicians’ awareness. For example, HCOs with the resources to query or mine electronic data repositories have options for both retrospective and prospective measurements of diagnostic safety [35].

Even in the absence of standardized methods for measuring diagnostic error, HCOs should measure, learn from, and intervene to address threats to diagnostic safety by following basic measurement principles. For instance, the recently released Salzburg Statement on Moving Measurement Into Action was published through a collaboration between the Salzburg Global Seminar and the Institute for Healthcare Improvement. The statement recognizes the absence of standard and effective safety measures and provides eight broad principles for patient safety measurement. These principles act as “a call to action for all stakeholders in reducing harm, including policymakers, managers and senior leaders, researchers, health care professionals, and patients, families, and communities” [36].

In the same spirit, we propose several general assumptions that should guide and facilitate institutional practices for measurement of diagnostic safety and lead to actionable intelligence that identifies opportunities for improvement:

  • The underlying motivation for measurement is to learn from errors and improve clinical operations, as opposed to merely responding to external incentives or disincentives.

  • Efforts should focus on conditions that have been shown to be missed relatively often, with resulting patient harm.

  • Measurement should not be used punitively to identify provider failures but rather should be used to uncover system-level problems [37]. Information should be used to inform system fixes and related organizational processes and procedures that constitute the sociotechnical context of diagnostic safety.

  • Use of a single method to identify diagnostic error, such as manual chart reviews or voluntary reporting, will have limited effectiveness [38], [39]. For ongoing, widespread monitoring at an organizational level, a combination of methods will have higher yield.

  • Hindsight bias, i.e., when knowledge of the outcome significantly affects perception of past events, is inherent in most analyses and will need to be minimized by focusing on how missed opportunities can become learning opportunities.

With this general guidance in mind, we recommend that HCOs begin to monitor diagnostic safety using the most robust data sources currently available. Table 1 describes various data sources and strategies that could enable measurement of diagnostic error, along with their respective strengths and limitations. The following sections describe these data sources and strategies and the evidence to support their use for measurement of diagnostic safety.

Table 1:

Data sources and related strategies for measurement of diagnostic safety.

Routinely recorded quality and safety events
  • What can be learned: Awareness of the impact and harm of diagnostic safety events; emerging patterns that may suggest high-risk situations
  • Data collection: Peer review; morbidity and mortality conferences; adverse event or incident reports; risk management sources; malpractice claims; autopsy
  • Data synthesis: Content analysis of related sources of clinical data; record reviews
  • Strengths: High level of detail; focus on events with clear potential for harm
  • Limitations: Captured events that are not representative; unknown scope and frequency of errors
  • Examples: Meeks et al. [42]; Cifra et al. [113]; Gupta et al. [114]; Shojania et al. [115]

Solicited clinician reports
  • What can be learned: Understanding of system-related and cognitive factors that affect diagnostic safety
  • Data collection: Web-based, telephone-based, written, and in-person mechanisms to report suspected safety breakdowns; followup interviews with clinicians
  • Data synthesis: Content analysis; record reviews; real-time assessment or intervention*
  • Strengths: May capture safety breakdowns undetected by algorithmic methods, including “near-miss” and low-harm events; can capture events in real time
  • Limitations: Needs engaged clinicians and institutional support for maintenance; captured events that are not representative; underestimates of frequency of safety events
  • Examples: Okafor et al. [52]

Solicited patient reports
  • What can be learned: Understanding of system-related, patient-related, and communication factors that affect diagnostic safety
  • Data collection: Surveys; interviews; followup of patient complaints to institutions, licensing boards, and accrediting organizations; real-time reporting (e.g., inpatient hotline)
  • Data synthesis: Content analysis; record review; real-time assessment or intervention*
  • Strengths: May capture safety breakdowns undetected by algorithmic methods or manual chart review; can capture events in real time
  • Limitations: Lack of validated measurement strategies; captured events that may not be representative; underestimates of frequency of safety events
  • Examples: Walton et al. [116]; Fowler et al. [117]; Giardina et al. [61]

Administrative billing data
  • What can be learned: Detection of diagnostic safety related patterns and events within a cohort defined by high-risk indicators
  • Data collection: ICD-based indicators applied to administrative datasets to select cohorts for further review (ICD = International Classification of Diseases)
  • Data synthesis: Statistical analysis
  • Strengths: Data that are routinely collected and widely accessible
  • Limitations: Validity for case finding not established; variable data quality; lack of detailed and contextual clinical data in administrative datasets
  • Examples: Liberman and Newman-Toker [64]; Mahajan et al. [66]

Medical records (random, selective, and e-trigger enhanced)
  • What can be learned: Detection of diagnostic safety events within a sample of screened or randomly selected records; insight into vulnerable structures and processes
  • Data collection: Manual review by one or more trained raters; selection of high-risk cohorts (such as patients with cancer or epidural abscess); use of electronic triggers to flag charts for manual review
  • Data synthesis: Record review (e.g., using a structured review tool such as the Safer Dx instrument); real-time assessment or intervention*
  • Strengths: High level of detail regarding the course of clinical care; acceptable agreement for case-finding by trained reviewers; intervention and investigation with minimal recall bias possible due to detection of safety concerns in real time
  • Limitations: Inefficient (more efficient with selected high-risk cohorts or use of e-triggers); variable quality of clinical documentation; limited to data systems with large repositories of data that can be queried; dedicated monitoring resources needed for real-time application
  • Examples: Bhise et al. [70]; Murphy et al. [78]; Singh et al. [19]

Advanced data science methods, including natural language processing of EHR data*
  • What can be learned: Detection of possible diagnostic safety events (retrospectively or in real time) based on machine-learning algorithms
  • Data collection: Application of algorithms to clinical documentation or other unstructured EHR data to search for language indicating possible safety concerns
  • Data synthesis: Record review; real-time assessment or intervention*
  • Strengths: Can capture data that may be missed by administrative data-based screening; may enhance efficiency of record review
  • Limitations: Validity for case finding unknown; limited to data systems with large repositories of data that can be queried; dedicated monitoring resources needed for real-time application
  • Examples: None known

Note: Approaches marked with an asterisk (*) have potential for future application but are not currently developed for real-time surveillance.

Learning from known incidents and reports

No single data source will capture the full range of diagnostic safety concerns. Valuable information can be gleaned from even limited data sources so long as those who use the data remain mindful of their limitations for a given purpose [40]. For instance, many routinely recorded discrete events lend themselves to retrospective analysis of diagnostic safety. Most healthcare organizations have incident reporting systems, although these systems have historically captured few diagnostic events [41].

There is also an opportunity to leverage peer review programs to improve diagnostic self-assessment, feedback, and improvement [42]. Similarly, autopsy reports [43], diagnostic discrepancies at admission versus discharge [44], [45], escalations of care [46], [47], and malpractice claims [48], [49], [50], [51] may be reviewed with special attention to opportunities to improve diagnosis. These data sources may not shed light on the frequency or scope of a problem, but they can help raise awareness of the impact and harm of diagnostic errors and, in some cases, specific opportunities for improvement.

Voluntary reports solicited specifically from clinicians who make diagnoses are another potentially useful source of data on diagnostic safety. For example, reports from clinicians who have witnessed diagnostic error have the advantage of rich detail that, at least in some cases, may offer insight into ways to prevent or mitigate future errors. However, no standardized mechanisms exist to report diagnostic errors. Despite widespread efforts to enable providers to report errors [17], [52], clinicians find reporting tools onerous and are often unaware of errors they make [53].

It has also become clear that a local champion and quality improvement team support are needed to sustain reporting behavior. At present, few facilities or organizations allocate “protected time” that is essential for clinicians to report, analyze, and learn from safety events. Some of these challenges could be overcome by having frontline clinicians report events briefly and allowing organizational safety teams (which include other clinicians) to analyze them. Still, voluntary reporting alone cannot address the multitude of complex diagnostic safety concerns, and reporting can only be one aspect of a comprehensive measurement strategy [54].

Patients are often underexplored sources of information, and many organizations already conduct patient surveys, interviews, and reviews of patient complaints to learn about safety risks [55], [56], [57], [58]. Prior work in other areas of patient safety (e.g., medication errors, infection control) has examined the potential of engaging patients proactively to monitor safety risks and problems [59], [60]. With subsequent development, similar mechanisms could be used to monitor diagnosis-related issues.

One limitation of current patient reporting systems is the lack of validated patient-reported questions or methods to detect diagnostic safety concerns. Barriers to patient engagement in safety initiatives, including low health literacy, lack of wider acceptance of safety monitoring as part of the patient role, provider expectations and attitudes, and communication differences, will also need to be addressed to make the most of such efforts [59], [61], [62]. Real-time adverse event and “near-miss” reporting systems, akin to those intended for use by clinicians, are another potential mechanism to collect patient-reported data on diagnostic safety [63].

Learning from existing large datasets

Whereas direct reports from patients and clinicians may offer unique insights, HCOs may also be ready to use large datasets to uncover trends in diagnostic performance and events that are not otherwise easily identified. Administrative and billing data are widely available in most modern HCOs and have been proposed as one source of data for detecting missed opportunities for accurate and timely diagnosis [64], [65], [66]. For example, diagnosis codes assigned at successive clinical encounters may be used as a proxy for the evolution of a clinical diagnosis; if significant discrepancies are found, they may prompt a search for reasons [44]. Symptom-disease dyads, such as abdominal pain followed by appendicitis a few days later or dizziness followed by stroke, are examples of this approach [66], [67].
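As an illustration, the dyad approach can be sketched as a simple query over visit-level administrative records. Everything here is an assumption for illustration only: the record shape, the specific ICD-10 codes, and the 14-day window are hypothetical, not a validated case-finding algorithm.

```python
from datetime import date, timedelta

# Hypothetical visit-level administrative records: (patient_id, visit_date, diagnosis_code).
# Illustrative ICD-10 codes: R10.9 = unspecified abdominal pain, K35.x = acute appendicitis.
visits = [
    ("pt1", date(2020, 3, 1), "R10.9"),
    ("pt1", date(2020, 3, 4), "K35.80"),  # appendicitis 3 days after abdominal pain
    ("pt2", date(2020, 3, 1), "R10.9"),
    ("pt2", date(2020, 5, 1), "K35.80"),  # too far apart to count as a dyad
]

def find_dyad_cases(visits, symptom_code, disease_prefix, window_days=14):
    """Flag patients whose symptom visit is followed by the disease diagnosis within the window."""
    flagged = set()
    for pid, symptom_date, code in visits:
        if code != symptom_code:
            continue
        for pid2, disease_date, code2 in visits:
            if (pid2 == pid and code2.startswith(disease_prefix)
                    and timedelta(0) < disease_date - symptom_date <= timedelta(days=window_days)):
                flagged.add(pid)
    return flagged

print(find_dyad_cases(visits, "R10.9", "K35"))  # {'pt1'}
```

In practice, the flagged cohort would not be treated as a count of errors but as a set of records prioritized for manual review, consistent with the paper's caution about the limited sensitivity of administrative data.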

This strategy may be intuitive to patient safety leaders because several other safety measurement methods are also based on diagnosis codes extracted from administrative datasets (e.g., patient safety indicators) [68], [69]. However, unlike specific safety events that can be coded with good sensitivity (e.g., retention of foreign objects during procedures), administrative data are not sufficiently sensitive to detect diagnostic errors. Moreover, administrative data lack relevant clinical details about diagnostic processes that can be improved. Administrative data sources are mainly useful insofar as they can be used to identify patterns or specific cohorts of patients to further review for presence of diagnostic error, based on diagnosis or other relevant characteristics that may be considered risk factors or of special interest for diagnostic safety improvement.

Medical records can be a rich source of data, as they contain clinical details and reflect the patient’s longitudinal care journey. Although medical record reviews are considered valuable, and sometimes even the gold standard, for detecting diagnostic errors, it is often not clear which records to review. Reviewing records at random can be burdensome and resource intensive. However, more selective methods can identify a high-yield subset (e.g., reviewing records of patients diagnosed with specific clinical conditions at high risk of being missed, such as colorectal cancer or spinal epidural abscess) [70], [71]. Such selective methods can be more efficient compared with voluntary reporting or nonselective or random manual review.

Another way to select records is through the “trigger” approach, which aims to “alert patient safety personnel to possible adverse events so they can review the medical record to determine if an actual or potential adverse event has occurred” [72], [73], [74], [75]. EHRs and clinical data warehouses make it possible to identify signals suggestive of missed diagnosis prior to detailed reviews. Using electronic algorithms, or “e-triggers,” which mine vast amounts of clinical and administrative data to identify these signals, HCOs can search EHRs on a scale that would be untenable with manual or random search methods [74], [76], [77].

For example, algorithms could identify patients with a certain type of abnormal test result (denominator) and identify which results have still not been acted upon after a certain length of time (numerator) [78]. This type of algorithm is possible because the data about an abnormal result (e.g., an abnormal hemoglobin value with microcytic anemia) and the followup action needed (e.g., colonoscopy for a 65-year-old patient) are (or should be) coded in the EHR. Similarly, certain patterns, such as unexpected hospitalizations after a primary care visit, can be identified more accurately if corresponding clinical data are available [19].
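A minimal sketch of such a numerator/denominator e-trigger follows. The record fields and the 60-day threshold are made-up assumptions for illustration; operational e-triggers (e.g., those described by Murphy et al.) encode considerably more clinical logic and exclusion criteria.

```python
from datetime import date

# Hypothetical coded EHR extract: abnormal results and any documented follow-up action.
results = [
    {"patient": "pt1", "test": "hemoglobin", "abnormal": True,
     "result_date": date(2020, 1, 5), "followup_date": None},
    {"patient": "pt2", "test": "hemoglobin", "abnormal": True,
     "result_date": date(2020, 1, 5), "followup_date": date(2020, 1, 20)},
    {"patient": "pt3", "test": "hemoglobin", "abnormal": False,
     "result_date": date(2020, 1, 5), "followup_date": None},
]

def etrigger_no_followup(results, as_of, max_days=60):
    """Denominator: abnormal results; numerator: those still lacking follow-up after max_days."""
    denominator = [r for r in results if r["abnormal"]]
    numerator = [r for r in denominator
                 if r["followup_date"] is None
                 and (as_of - r["result_date"]).days > max_days]
    return numerator, denominator

flagged, eligible = etrigger_no_followup(results, as_of=date(2020, 4, 1))
print(len(flagged), len(eligible))  # 1 2
```

The flagged/eligible ratio could serve as a monitoring statistic over time, while the flagged records themselves feed a targeted review queue.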

To enhance the yield of record reviews, e-triggers can be developed to alert personnel to potential patient safety events and enable targeted review of high-risk patients [35]. The Institute for Healthcare Improvement’s Global Trigger Tools [79], which include both manual and electronic trigger tools to detect inpatient events [80], [81], [82], are widely used but were not specifically designed to detect diagnostic errors. More targeted e-triggers for diagnostic safety measurement and monitoring can be developed and integrated within existing patient safety surveillance systems in the future and may enhance the yield of record reviews [46], [83], [84], [85], [86]. Other examples of possible e-triggers that are either in development or approaching wider testing and implementation are described in Table 2.

Table 2:

Examples of potential Safer Dx e-triggers mapped to diagnostic process dimensions of the Safer Dx framework [14] (adapted from Murphy et al. [35]).

Patient-provider encounter
  • Trigger: Emergency department (ED) or primary care (PC) visit followed by unplanned hospitalization. Potential error: missed red flag findings or incorrect diagnosis during the initial office visit.
  • Trigger: ED/PC visit within 72 h after ED or hospital discharge. Potential error: missed red flag findings during the initial ED/PC or hospital visit.
  • Trigger: Unexpected transfer from hospital general floor to ICU within 24 h of ED admission. Potential error: missed red flag findings during admission.

Performance and interpretation of diagnostic tests
  • Trigger: Amended imaging report. Potential error: missed findings on initial read or lack of communication of amended findings.

Follow-up and tracking of diagnostic information
  • Trigger: Abnormal test result with no timely followup action. Potential error: abnormal test result missed.

Referral-related factors
  • Trigger: Urgent specialty referral followed by discontinued referral within 7 days. Potential error: delay in diagnosis from lack of specialty expertise.

Patient-related factors
  • Trigger: Poor rating on patient experience scores post ED/PC visit. Potential error: patient report of communication barriers related to missed diagnosis.

Both selective and e-trigger enhanced reviews are advantageous because they can allow development of potential measures for wider testing and adoption. For instance, measures of diagnostic test result follow-up may focus specifically on abnormal test results that are suggestive of serious or time-sensitive diagnoses, such as cancer [87]. Examples of diagnostic safety measures could include the proportion of documented “red-flag” symptoms or test results that receive timely follow-up, or the proportion of patients with cancer newly diagnosed within 60 days of first presentation of known red flags [16]. Depending on an HCO’s priorities, safety leaders could consider additional development, testing, and potential implementation of measure concepts proposed by NQF and other researchers [8], [16], [88].

Other mechanisms for mining clinical data repositories, such as natural language processing (NLP) algorithms [89], are still early in development but may be a useful complement to future e-triggers for detection of diagnostic safety events. Whereas e-triggers leverage structured data to identify possible safety concerns, a substantial proportion of data in EHRs is unstructured and therefore inaccessible to e-triggers. NLP and machine-learning techniques could help analyze and interpret large volumes of unstructured textual data, such as those in clinical narratives and free-text fields, and reduce the burden of medical record reviews by selecting records with potentially highest yield. While research has examined the predictive validity of NLP algorithms for detection of safety incidents and adverse events [90], [91], [92], to date this methodology has not been applied to or validated for measurement of diagnostic safety.
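As a rough illustration of how unstructured notes might be screened while validated NLP models remain unavailable, even a keyword pattern search can shortlist notes for manual review. The note snippets and trigger phrases below are assumptions for illustration only and are far cruder than the machine-learning approaches the cited research evaluates.

```python
import re

# Hypothetical free-text note snippets keyed by note ID.
notes = {
    "note1": "Patient returned to ED; in retrospect the chest pain was a missed diagnosis of PE.",
    "note2": "Routine follow-up visit, symptoms resolved.",
    "note3": "Delay in diagnosis discussed with family; amended radiology report reviewed.",
}

# Illustrative phrases suggesting a possible diagnostic safety concern.
PATTERNS = [r"missed diagnos\w+", r"delay(ed)? in diagnosis", r"amended (radiology )?report"]

def flag_notes(notes):
    """Return IDs of notes whose text matches any concern pattern (case-insensitive)."""
    regex = re.compile("|".join(PATTERNS), flags=re.IGNORECASE)
    return sorted(note_id for note_id, text in notes.items() if regex.search(text))

print(flag_notes(notes))  # ['note1', 'note3']
```

A real deployment would need to validate such patterns against reviewed cases, since keyword matching alone cannot distinguish a documented error from a clinician merely discussing the possibility of one.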

Synthesizing data and enhancing confidence in measurement

Determining the presence of diagnostic error is complex and requires additional evaluation for missed opportunities [93]. A binary classification (e.g., error or no error) may be insufficient for cases involving greater uncertainty, which call for more graded assessment approaches reflecting varying degrees of confidence in the determination of error [20], [94].

Depending on the measurement method, a thorough content review may be sufficient to identify missed opportunities for diagnosis, contributing factors, and harm or impact. However, systematic approaches to surveillance and measurement of diagnostic safety often warrant the use of structured data collection instruments [95], [96] that assess diagnostic errors using objective criteria, as well as taxonomies to classify process breakdowns. For example, the Revised Safer Dx Instrument [97] is a validated tool for an initial screen of the presence or absence of diagnostic error in a case. It helps users identify potential diagnostic errors in a standardized way for further analysis and safety improvement efforts.

Structured assessment instruments can also be used to provide clinicians with data for feedback and reflection that they otherwise may not receive [98]. Furthermore, process breakdowns can be analyzed using approaches such as the Diagnostic Error Evaluation and Research taxonomy [99] or the Safer Dx Process breakdown taxonomy [97], both of which help to identify where in the diagnostic process a problem occurred. Other factors related to diagnostic error, such as the presence of patient harm (e.g., clear evidence of harm versus “near-misses”), preventability, and actionability, may also be important to define in advance so that the selected measurement strategy aligns with the learning and improvement goals. Nevertheless, the science of understanding the complex mix of cognitive and sociotechnical contributory factors and implementing effective solutions based on these data still needs additional development [100], [101], [102].

Getting ready for measurement: overcoming barriers and taking next steps

HCOs are resource constrained, and efforts to measure quality and safety of healthcare are anchored foremost to the measures specifically required by accrediting agencies and payers. Currently, measures of diagnostic performance and safety are not among these required measures, and the burden of additional data collection is a daunting prospect for HCOs that already struggle to meet existing requirements. Furthermore, national consensus is still needed on which aspects of diagnostic safety can be measured pragmatically in real-world care settings.

The current lack of incentives and models to measure diagnostic safety can make it difficult to get started even in the best-case scenario when leadership support and resources are available to support improvement efforts focused on diagnosis. Nevertheless, as the burden of diagnostic errors is increasingly recognized and as measurement strategies become better defined, diagnostic safety will no longer remain sidelined in the landscape of healthcare quality and safety.

Despite the additional burden that measurement for improvement would entail, the benefit of these activities for improving patient safety and stimulating learning and feedback could outweigh concerns. In fact, some HCOs have already started on their journeys, including one that has pursued measurement on multiple fronts [103] and another that aims to pursue measurement as part of becoming a “Learning and Exploration of Diagnostic Excellence (LEDE)” organization [33]. Now is the time for other HCOs to use the data already available to them to begin to detect, understand, and learn from diagnostic errors.

While diagnostic errors occur across the spectrum of medical practice, measurement should be strategic and focused on areas with strong potential for learning and impact. As with other aspects of patient safety measurement, goals should include:

  • Creating learning opportunities from past events with both potential and real harm,

  • Ensuring the reliability of diagnostic processes,

  • Anticipating and preparing for problems related to diagnosis, and

  • Integrating and learning from the knowledge generated [13].

Absent a local or institutional need or mandate calling for a specific measurement target, we recommend that HCOs new to measurement of diagnostic safety focus initial efforts on a limited set of realistic measures that map to one or a few specific diagnoses or care processes. Measurement targets may be in line with:

  • Published literature (e.g., research showing a high rate of missed test results),

  • Priorities identified in national initiatives (e.g., timely diagnosis of cancer), or

  • Local needs identified by quality and safety committees (e.g., focusing on specific symptoms such as abdominal pain [104] or specific diseases such as spinal epidural abscess) [70].

Once a target is identified, HCOs should use measurement strategies that balance validity (i.e., accurate case finding) with yield. Figure 2 visualizes the implementation readiness of several diagnostic safety measurement strategies discussed in this brief. For example, e-trigger enhanced structured chart review appears suitable for operational measurement after additional development. Systems without EHR capabilities can use other data sources (e.g., selective reviews, event reports) to begin measurement as soon as possible, rather than wait for the information technology infrastructure needed to implement EHR-based algorithms. The strategies depicted in Figure 2 represent the current state of the science and are subject to change as new innovations are validated.
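As a rough illustration of the validity-yield tradeoff, the yield of a case-finding strategy can be tracked with simple bookkeeping like the sketch below; the function and the example numbers are hypothetical, not benchmarks from the cited literature.

```python
def review_yield(flagged_cases, confirmed_errors, review_hours):
    """Two illustrative yield summaries for a case-finding strategy:
    positive predictive value (confirmed errors per flagged case) and
    confirmed errors per hour of reviewer effort."""
    return {
        "ppv": confirmed_errors / flagged_cases,
        "errors_per_review_hour": confirmed_errors / review_hours,
    }

# Hypothetical comparison: random chart review vs. e-trigger enhanced review
# of the same number of charts with the same reviewer effort.
random_review = review_yield(flagged_cases=200, confirmed_errors=4, review_hours=100)
triggered_review = review_yield(flagged_cases=200, confirmed_errors=40, review_hours=100)
print(random_review)
print(triggered_review)
```

Tracking even these two quantities over time lets an HCO see whether a chosen strategy is surfacing enough learnable cases to justify its review burden.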

Figure 2: Implementation readiness of diagnostic safety measurement strategies. Larger circles denote higher potential yield for cases that can inform systemwide learning and improvement. Measurement strategies that are ready for implementation balance validity and yield (i.e., an estimate of the proportion of cases with diagnostic errors that could lead to learning and improvement relative to measurement effort). The relative position of these methods will vary according to local context.

Table 3 presents a summary of the strategies and the factors affecting their use.

Table 3:

Implementation readiness of diagnostic safety measurement strategies and estimated yield relative to effort.

| Measurement strategy | Stage of development | Current potential availability and/or accessibility of data source | Estimated yield relative to effort |
| --- | --- | --- | --- |
| Review of solicited reports from patients | Exploratory | Low | Medium |
| Advanced data science methods using EHR data (e.g., NLP) | Exploratory | Low | Very large |
| Mining administrative billing data | Exploratory | High | Very small |
| E-trigger enhanced chart review | Moderate | Moderate | Very large |
| Institutional peer review processes | Moderate | High | Medium |
| Morbidity and mortality conferences | Moderate | High | Medium |
| Review of solicited brief reports from clinicians | Moderate | Moderate | Very large |
| Selective chart review of high-risk cohorts | Mature | High | Large |
| Random chart review | Mature | High | Very small |
| Review of autopsy reports | Mature | Low | Large |
| Review of malpractice claims | Mature | High | Medium |
| Review of incident reports | Mature | High | Small |

Based on what HCOs are learning, they could conduct additional activities related to quality improvement, such as:

  • Better managing test results to make sure they are acted on [105],

  • Conducting a self-assessment of reliability of communication and reporting of test results [106],

  • Closing referral loops [107], and

  • Enhancing teamwork and communication with patients [108].

Meanwhile, research should stimulate the development of more rigorous measures that correlate not only with more timely and accurate diagnosis but also with less preventable diagnostic harm.

Before broader implementation and adoption, especially for uses beyond the institutional quality improvement activities discussed in this brief, balancing measures will need to be developed. For example, measures that focus on underdiagnosis of a condition (e.g., stroke) may lead to overtesting for that condition (e.g., magnetic resonance imaging for all patients with dizziness in primary or emergency care). Thus, research efforts must continue to inform and strengthen the rigor of measurement.

Other unintended consequences, including gaming and measure fatigue, may emerge from implementing suboptimal and inadequately validated performance measures for public reporting, performance incentives, or penalties. For measures to be used for accountability, they will first need to be tested and validated and linked to improving suboptimal care or outcomes [109]. This document and its recommendations thus focus largely on measurement for improvement at the HCO level and not measurement for accountability purposes.

In the long term, measures will need to be developed to address diagnostic excellence, defined in terms of the ability to make a diagnosis using the fewest resources while maximizing patient experiences, managing and communicating uncertainty to patients, and tolerating watchful waiting when unfocused treatment may be harmful [110]. As measurement methods evolve, they should address concepts such as uncertainty and clinician calibration (the degree to which clinicians’ confidence in the accuracy of their diagnostic decision making aligns with their actual accuracy [111]). Understanding and measuring these concepts in the diagnostic safety equation is essential to optimize the balance between reducing overuse and addressing underuse of diagnostic tests and other resources [112].


While measurement of diagnostic error has been challenging, recent research and early implementation in operational settings have provided new evidence that can now stimulate progress. Whether by leveraging current resources or building capacity for additional data gathering, HCOs have a variety of options to begin measurement to reduce preventable diagnostic harm. Measurement can be implemented progressively and, eventually, supported by payers and accrediting organizations. This overview is a call to action for leaders of both large and small HCOs in any setting to assess how they could begin their journey to measure and reduce preventable diagnostic harm, and then to act on that assessment.

Corresponding author: Hardeep Singh, MD, MPH, Center for Innovations in Quality, Effectiveness, and Safety (IQuESt), Michael E. DeBakey Veterans Affairs Medical Center, Houston, TX, USA; and Baylor College of Medicine, 2002 Holcombe Blvd. #152, Houston, TX, USA, E-mail:

Article note: This article is being published concurrently with AHRQ Publication No. 20-0040-1-EF, available at:

Award Identifier / Grant number: 75P00119R00265


This article is being published concurrently with AHRQ Publication No. 20-0040-1-EF. Singh H, Bradford A, Goeschel C. Operational Measurement of Diagnostic Safety: State of the Science. AHRQ Publication #20-0040-1-EF. Rockville, MD: Agency for Healthcare Research and Quality; April 2020.

  1. Research funding: This paper was funded under Contract No. HHSP233201500022I/75P00119F37006 from the Agency for Healthcare Research and Quality (AHRQ), U.S. Department of Health and Human Services. The authors are solely responsible for this document’s contents, findings, and conclusions, which do not necessarily represent the views of AHRQ. Readers should not interpret any statement in this product as an official position of AHRQ or of the U.S. Department of Health and Human Services. Dr. Singh is funded in part by the Houston Veterans Administration (VA) Health Services Research and Development (HSR&D) Center for Innovations in Quality, Effectiveness, and Safety (CIN13-413), the VA HSR&D Service (CRE17-127 and the Presidential Early Career Award for Scientists and Engineers USA 14-274), the VA National Center for Patient Safety, the Agency for Healthcare Research and Quality (R01HS27363), and the Gordon and Betty Moore Foundation.

  2. Author contributions: All authors have accepted responsibility for the entire content of this manuscript and approved its submission.

  3. Competing interests: The funding organization(s) reviewed the contents of the issue brief but otherwise had no role in the writing of the report or in the decision to submit it for publication. The views expressed in this article do not represent the views of the U.S. Department of Veterans Affairs or the United States government.


1. Singh, H, Graber, ML. Improving diagnosis in health care - the next imperative for patient safety. N Engl J Med 2015;373:2493–5.10.1056/NEJMp1512241Search in Google Scholar

2. Saber Tehrani, AS, Lee, H, Mathews, SC, Shore, A, Makary, MA, Pronovost, PJ, et al. 25-year summary of U.S. malpractice claims for diagnostic errors 1986-2010: an analysis from the National Practitioner Data Bank. BMJ Qual Saf 2013;22:672–80.10.1136/bmjqs-2012-001550Search in Google Scholar

3. National Academies of Sciences, Engineering, and Medicine. Improving diagnosis in health care; 2015. Available from: [Accessed 31 Mar 2020].Search in Google Scholar

4. National Quality Forum. National quality Forum home page; 2020. Available from: [Accessed 31 Mar 2020].Search in Google Scholar

5. National Quality Forum. Measures, reports & tools; 2020. Available from: [Accessed 31 Mar 2020].Search in Google Scholar

6. Graber, ML, Wachter, RM, Cassel, CK. Bringing diagnosis into the quality and safety equations. J Am Med Assoc 2012;308:1211–2.10.1001/2012.jama.11913Search in Google Scholar

7. Gordon and Betty Moore Foundation. New projects aim to develop clinical quality measures to improve diagnosis; 2020. Available from: [Accessed 31 Mar 2020].Search in Google Scholar

8. National Quality Forum. Reducing diagnostic error: measurement considerations; 2019. Available from: [Accessed 31 Mar 2020].Search in Google Scholar

9. Agency for Healthcare Research and Quality. Diagnostic safety and quality; 2020. Available from: [Accessed 31 Mar 2020].Search in Google Scholar

10. Centers for Medicare & Medicaid Services. TEP current panels; 2020. Available from: [Accessed 31 Mar 2020].Search in Google Scholar

11. Health Research & Educational Trust. Improving diagnosis in medicine. Diagnostic error change package; 2018. Available from: [Accessed 1 Apr 2020].Search in Google Scholar

12. Olsen, LA, Aisner, D, McGinnis, JM, eds. Institute of medicine (US) roundtable on evidence-based medicine. The learning healthcare system: workshop summary. The national Academies collection: reports funded by national institutes of health. National Academies Press: Washington, DC; 2007. Available from: [Accessed 1 Apr 2020].Search in Google Scholar

13. Vincent, C, Burnett, S, Carthey, J. Safety measurement and monitoring in healthcare: a framework to guide clinical teams and healthcare organisations in maintaining safety. BMJ Qual Saf 2014;23:670–7.10.1136/bmjqs-2013-002757Search in Google Scholar

14. Singh, H, Sittig, DF. Advancing the science of measurement of diagnostic errors in healthcare: the Safer Dx framework. BMJ Qual Saf 2015;24:103–10.10.1136/bmjqs-2014-003675Search in Google Scholar

15. McGlynn, EA, McDonald, KM, Cassel, CK. Measurement is essential for improving diagnosis and reducing diagnostic error: a report from the Institute of Medicine. J Am Med Assoc 2015;314:2501–2.10.1001/jama.2015.13453Search in Google Scholar

16. Singh, H, Graber, ML, Hofer, TP. Measures to improve diagnostic safety in clinical practice. J Patient Saf 2019;15:311–6.10.1097/PTS.0000000000000338Search in Google Scholar

17. Graber, ML, Trowbridge, RL, Myers, JS, Umscheid, CA, Strull, W, Kanter, MH. The next organizational challenge: finding and addressing diagnostic error. Jt Comm J Qual Patient Saf 2014;40:102–10.10.1016/S1553-7250(14)40013-8Search in Google Scholar

18. Zwaan, L, de Bruijne, M, Wagner, C, Thijs, A, Smits, M, van der Wal, G, et al. Patient record review of the incidence, consequences, and causes of diagnostic adverse events. Arch Intern Med 2010;170:1015–21.10.1001/archinternmed.2010.146Search in Google Scholar PubMed

19. Singh, H, Giardina, T, Forjuoh, S, Reis, M, Kosmach, S, Khan, M, et al. Electronic health record-based surveillance of diagnostic errors in primary care. BMJ Qual Saf 2012;21:93–100.10.1136/bmjqs-2011-000304Search in Google Scholar PubMed PubMed Central

20. Zwaan, L, Singh, H. The challenges in defining and measuring diagnostic error. Diagnosis (Berl). 2015;2:97–103.10.1515/dx-2014-0069Search in Google Scholar PubMed PubMed Central

21. Singh, H. Diagnostic errors: moving beyond ‘no respect’ and getting ready for prime time. BMJ Qual Saf 2013;22:789–92.10.1136/bmjqs-2013-002387Search in Google Scholar PubMed PubMed Central

22. Hofer, TP, Kerr, EA, Hayward, RA. What is an error? Eff Clin Pract 2000;3:261–9. PMID:11151522.Search in Google Scholar

23. Donabedian, A. The quality of care. How can it be assessed?. J Am Med Assoc 1988;260:1743–8.10.1001/jama.260.12.1743Search in Google Scholar PubMed

24. Singh, H, Giardina, TD, Meyer, AN, Forjuoh, SN, Reis, MD, Thomas, EJ. Types and origins of diagnostic errors in primary care settings. JAMA Intern Med 2013;173:418–25.10.1001/jamainternmed.2013.2777Search in Google Scholar PubMed PubMed Central

25. Sittig, DF, Singh, H. A new sociotechnical model for studying health information technology in complex adaptive healthcare systems. Qual Saf Health Care 2010;19:i68–74.10.1136/qshc.2010.042085Search in Google Scholar PubMed PubMed Central

26. Henriksen, K, Brady, J. The pursuit of better diagnostic performance: a human factors perspective. BMJ Qual Saf 2013;22:ii1–5.10.1136/bmjqs-2013-001827Search in Google Scholar PubMed PubMed Central

27. Dhaliwal, G. Web Exclusives. Annals for Hospitalists inpatient notes - diagnostic excellence starts with an incessant watch. Ann Intern Med 2017;167:HO2–3.10.7326/M17-2447Search in Google Scholar PubMed

28. Lane, KP, Chia, C, Lessing, JN, Limes, J, Mathews, B, Schaefer, J, et al. Improving resident feedback on diagnostic reasoning after handovers: the LOOP Project. J Hosp Med 2019;14:622–5.10.12788/jhm.3262Search in Google Scholar PubMed

29. Shenvi, EC, Feupe, SF, Yang, H, El-Kareh, R. “Closing the loop”: a mixed-methods study about resident learning from outcome feedback after patient handoffs. Diagnosis (Berl). 2018;5:235–42.10.1515/dx-2018-0013Search in Google Scholar PubMed PubMed Central

30. Thomas, EJ, Classen, DC. Patient safety: let’s measure what matters. Ann Intern Med 2014;160:642–3.10.7326/M13-2528Search in Google Scholar PubMed

31. Solberg, LI, Mosser, G, McDonald, S. The three faces of performance measurement: improvement, accountability, and research. Jt Comm J Qual Improv 1997;23:135–47.10.1016/S1070-3241(16)30305-4Search in Google Scholar

32. Graber, ML. The incidence of diagnostic error in medicine. BMJ Qual Saf. 2013;22:ii21-7.10.1136/bmjqs-2012-001615Search in Google Scholar

33. Singh, H, Upadhyay, DK, Torretti, D. Developing health care organizations that pursue learning and exploration of diagnostic excellence: an action plan. Acad Med 2019. Available from: [Accessed 1 Apr 2020].10.1097/ACM.0000000000003062Search in Google Scholar

34. Amelung, D, Whitaker, KL, Lennard, D, Ogden, M, Sheringham, J, Zhou, Y, et al. Influence of doctor-patient conversations on behaviours of patients presenting to primary care with new or persistent symptoms: a video observation study. BMJ Qual Saf 2019. Available from: [Accessed 1 Apr 2020].10.1136/bmjqs-2019-009485Search in Google Scholar

35. Murphy, DR, Meyer, AN, Sittig, DF, Meeks, DW, Thomas, EJ, Singh, H. Application of electronic trigger tools to identify targets for improving diagnostic safety. BMJ Qual Saf 2019;28:151–9.10.1136/bmjqs-2018-008086Search in Google Scholar

36. Institute for Healthcare Improvement and Salzburg Global Seminar. The Salzburg statement on moving measurement into action: global principles for measuring patient safety; 2019. Available from: [Accessed 31 Mar 2020].Search in Google Scholar

37. Smith, MW, Davis, GT, Murphy, DR, Laxmisan, A, Singh, H. Resilient actions in the diagnostic process and system performance. BMJ Qual Saf 2013;22:1006–13.10.1136/bmjqs-2012-001661Search in Google Scholar

38. Murff, HJ, Patel, VL, Hripcsak, G, Bates, DW. Detecting adverse events for patient safety research: a review of current methodologies. J Biomed Inform 2003;36:131–43.10.1016/j.jbi.2003.08.003Search in Google Scholar

39. Thomas, EJ, Petersen, LA. Measuring errors and adverse events in health care. J Gen Intern Med 2003;18:61–7.10.1046/j.1525-1497.2003.20147.xSearch in Google Scholar

40. Rosen, AK. Are we getting better at measuring patient safety? Perspectives on safety; 2010. Available from: [Accessed 1 Apr 2020].Search in Google Scholar

41. Levtzion-Korach, O, Frankel, A, Alcalai, H, Keohane, C, Orav, J, Graydon-Baker, E, et al. Integrating incident data from five reporting systems to assess patient safety: making sense of the elephant. Jt Comm J Qual Patient Saf 2010;36:402–10.10.1016/S1553-7250(10)36059-4Search in Google Scholar

42. Meeks, DW, Meyer, AN, Rose, B, Walker, YN, Singh, H. Exploring new avenues to assess the sharp end of patient safety: an analysis of nationally aggregated peer review data. BMJ Qual Saf 2014;23:1023–30.10.1136/bmjqs-2014-003239Search in Google Scholar PubMed

43. Tejerina, EE, Padilla, R, Abril, E, Frutos-Vivar, F, Ballen, A, Rodriguez-Barbero, JM, et al. Autopsy-detected diagnostic errors over time in the intensive care unit. Hum Pathol 2018;76:85–90.10.1016/j.humpath.2018.02.025Search in Google Scholar PubMed

44. Hautz, WE, Kammer, JE, Hautz, SC, Sauter, TC, Zwaan, L, Exadaktylos, AK, et al. Diagnostic error increases mortality and length of hospital stay in patients presenting through the emergency room. Scand J Trauma Resusc Emerg Med 2019;27:54.10.1186/s13049-019-0629-zSearch in Google Scholar PubMed PubMed Central

45. Gov-Ari, E, Leann Hopewell, B. Correlation between pre-operative diagnosis and post-operative pathology reading in pediatric neck masses--a review of 281 cases. Int J Pediatr Otorhinolaryngol 2015;79:2–7.10.1016/j.ijporl.2014.11.011Search in Google Scholar PubMed

46. Bhise, V, Sittig, DF, Vaghani, V, Wei, L, Baldwin, J, Singh, H. An electronic trigger based on care escalation to identify preventable adverse events in hospitalised patients. BMJ Qual Saf 2018;27:241–6.10.1136/bmjqs-2017-006975Search in Google Scholar PubMed PubMed Central

47. Davalos, MC, Samuels, K, Meyer, AN, Thammasitboon, S, Sur, M, Roy, K, et al. Finding diagnostic errors in children admitted to the PICU. Pediatr Crit Care Med 2017;18:265–71.10.1097/PCC.0000000000001059Search in Google Scholar PubMed

48. Schiff, GD, Puopolo, AL, Huben-Kearney, A, Yu, W, Keohane, C, McDonough, P, et al. Primary care closed claims experience of Massachusetts malpractice insurers. JAMA Intern Med 2013;173:2063–8.10.1001/jamainternmed.2013.11070Search in Google Scholar PubMed

49. Gandhi, TK, Kachalia, A, Thomas, EJ, Puopolo, AL, Yoon, C, Brennan, TA, et al. Missed and delayed diagnoses in the ambulatory setting: a study of closed malpractice claims. Ann Intern Med 2006;145:488–96.10.7326/0003-4819-145-7-200610030-00006Search in Google Scholar PubMed

50. Kachalia, A, Gandhi, TK, Puopolo, AL, Yoon, C, Thomas, EJ, Griffey, R, et al. Missed and delayed diagnoses in the emergency department: a study of closed malpractice claims from 4 liability insurers. Ann Emerg Med 2007;49:196–205.10.1016/j.annemergmed.2006.06.035Search in Google Scholar PubMed

51. Singh, H, Thomas, EJ, Petersen, LA, Studdert, DM. Medical errors involving trainees: a study of closed malpractice claims from 5 insurers. Arch Intern Med 2007;167:2030–6.10.1001/archinte.167.19.2030Search in Google Scholar PubMed

52. Okafor, N, Payne, VL, Chathampally, Y, Miller, S, Doshi, P, Singh, H. Using voluntary reports from physicians to learn from diagnostic errors in emergency medicine. Emerg Med J 2015. Available from: [Accessed 2 Apr 2020].10.1136/emermed-2014-204604Search in Google Scholar PubMed

53. Schiff, GD. Minimizing diagnostic error: the importance of follow-up and feedback. Am J Med 2008;121:S38–42. in Google Scholar

54. Shojania, K. The elephant of patient safety: what you see depends on how you look. Jt Comm J Qual Patient Saf. 2010;36:399–401. in Google Scholar

55. Ward, JK, Armitage, G. Can patients report patient safety incidents in a hospital setting? A systematic review. BMJ Qual Saf 2012;21:685–99.10.1136/bmjqs-2011-000213Search in Google Scholar PubMed

56. Scott, J, Heavey, E, Waring, J, De Brun, A, Dawson, P. Implementing a survey for patients to provide safety experience feedback following a care transition: a feasibility study. BMC Health Serv Res 2019;19:613.10.1186/s12913-019-4447-9Search in Google Scholar PubMed PubMed Central

57. Haroutunian, P, Alsabri, M, Kerdiles, FJ, Adel Ahmed Abdullah, H, Bellou, A. Analysis of factors and medical errors involved in patient complaints in a European emergency department. Adv J Emerg Med 2018;2:e4.Search in Google Scholar

58. Gillespie, A, Reader, TW. Patient-centered insights: using health care complaints to reveal hot spots and blind spots in quality and safety. Milbank Q 2018;96:530–67.10.1111/1468-0009.12338Search in Google Scholar PubMed PubMed Central

59. Longtin, Y, Sax, H, Leape, LL, Sheridan, SE, Donaldson, L, Pittet, D. Patient participation: current knowledge and applicability to patient safety. Mayo Clin Proc. 2010;85:53–62.10.4065/mcp.2009.0248Search in Google Scholar PubMed PubMed Central

60. Weingart, SN, Toth, M, Eneman, J, Aronson, MD, Sands, DZ, Ship, AN, et al. Lessons from a patient partnership intervention to prevent adverse drug events. Int J Qual Health Care 2004;16:499–507.10.1093/intqhc/mzh083Search in Google Scholar PubMed

61. Giardina, TD, Haskell, H, Menon, S, Hallisy, J, Southwick, FS, Sarkar, U, et al. Learning from patients’ experiences related to diagnostic errors is essential for progress in patient safety. Health Aff 2018;37:1821–7.10.1377/hlthaff.2018.0698Search in Google Scholar PubMed PubMed Central

62. Smith, K, Baker, K, Wesley, D, Zipperer, L, Clark, MD. Guide to improving patient safety in primary care settings by engaging patients and families: environmental scan report. (Prepared by: MedStar Health Research Institute under Contract No. HHSP233201500022I/HHSP23337002T). Agency for Healthcare Research and Quality. AHRQ Publication: Rockville, MD. No. 17-0021-2-EF; 2017.Search in Google Scholar

63. Huerta, TR, Walker, C, Murray, KR, Hefner, JL, McAlearney, AS, Moffatt-Bruce, S. Patient safety errors: leveraging health information technology to facilitate patient reporting. J Healthc Qual 2016;38:17–23.10.1097/JHQ.0000000000000022Search in Google Scholar PubMed

64. Liberman, AL, Newman-Toker, DE. Symptom-Disease Pair Analysis of Diagnostic Error (SPADE): a conceptual framework and methodological approach for unearthing misdiagnosis-related harms using big data. BMJ Qual Saf 2018;27:557–66.10.1136/bmjqs-2017-007032Search in Google Scholar PubMed PubMed Central

65. Michelson, KA, Buchhalter, LC, Bachur, RG, Mahajan, P, Monuteaux, MC, Finkelstein, JA. Accuracy of automated identification of delayed diagnosis of pediatric appendicitis and sepsis in the ED. Emerg Med J 2019;36:736–40.10.1136/emermed-2019-208841Search in Google Scholar PubMed

66. Mahajan, P, Basu, T, Pai, CW, Singh, H, Petersen, N, Bellolio, MF, et al. Factors associated with potentially missed diagnosis of appendicitis in the emergency department. JAMA Netw Open 2020;3:e200612.10.1001/jamanetworkopen.2020.0612Search in Google Scholar PubMed PubMed Central

67. Newman-Toker, DE, Moy, E, Valente, E, Coffey, R, Hines, AL. Missed diagnosis of stroke in the emergency department: a cross-sectional analysis of a large population-based sample. Diagnosis (Berl). 2014;1:155–66.10.1515/dx-2013-0038Search in Google Scholar PubMed PubMed Central

68. Southern, DA, Burnand, B, Droesler, SE, Flemons, W, Forster, AJ, Gurevich, Y, et al. Deriving ICD-10 codes for Patient Safety Indicators for large-scale surveillance using administrative hospital data. Med Care 2017;55:252–60.10.1097/MLR.0000000000000649Search in Google Scholar PubMed

69. Miller, MR, Elixhauser, A, Zhan, C, Meyer, GS. Patient Safety Indicators: using administrative data to identify potential patient safety concerns. Health Serv Res 2001;36:110–32.Search in Google Scholar

70. Bhise, V, Meyer, AND, Singh, H, Wei, L, Russo, E, Al-Mutairi, A, et al. Errors in diagnosis of spinal epidural abscesses in the era of electronic health records. Am J Med 2017;130:975–81.10.1016/j.amjmed.2017.03.009Search in Google Scholar PubMed

71. Singh, H, Daci, K, Petersen, L, Collins, C, Petersen, N, Shethia, A, et al. Missed opportunities to initiate endoscopic evaluation for colorectal cancer diagnosis. Am J Gastroenterol 2009;104:2543–54.10.1038/ajg.2009.324Search in Google Scholar PubMed PubMed Central

72. Agency for Healthcare Research and Quality. Triggers and targeted injury detection systems (TIDS) expert panel meeting: conference summary report. AHRQ Pub: Rockville, MD. No. 090003; 2009. Available from: [Accessed 2 Apr 2020].Search in Google Scholar

73. Mull, HJ, Nebeker, JR, Shimada, SL, Kaafarani, HM, Rivard, PE, Rosen, AK. Consensus building for development of outpatient adverse drug event triggers. J Patient Saf 2011;7:66–71.10.1097/PTS.0b013e31820c98baSearch in Google Scholar PubMed PubMed Central

74. Szekendi, MK, Sullivan, C, Bobb, A, Feinglass, J, Rooney, D, Barnard, C, et al. Active surveillance using electronic triggers to detect adverse events in hospitalized patients. Qual Saf Health Care 2006;15:184–90.10.1136/qshc.2005.014589Search in Google Scholar PubMed PubMed Central

75. Agency for Healthcare Research and Quality. Patient safety primer. Triggers and trigger tools; 2019. Available from: [Accessed 2 Apr 2020].Search in Google Scholar

76. Shenvi, EC, El-Kareh, R. Clinical criteria to screen for inpatient diagnostic errors: a scoping review. Diagnosis (Berl). 2015;2:3–19.10.1515/dx-2014-0047Search in Google Scholar PubMed PubMed Central

77. Singh, H, Thomas, EJ, Khan, MM, Petersen, LA. Identifying diagnostic errors in primary care using an electronic screening algorithm. Arch Intern Med 2007;167:302–8.10.1001/archinte.167.3.302Search in Google Scholar PubMed

78. Murphy, DR, Laxmisan, A, Reis, BA, Thomas, EJ, Esquivel, A, Forjuoh, SN, et al. Electronic health record-based triggers to detect potential delays in cancer diagnosis. BMJ Qual Saf 2014;23:8–16.10.1136/bmjqs-2013-001874Search in Google Scholar PubMed

79. Classen, DC, Pestotnik, SL, Evans, RS, Burke, JP. Computerized surveillance of adverse drug events in hospital patients. J Am Med Assoc 1991;266:2847–51.10.1001/jama.1991.03470200059035Search in Google Scholar

80. Classen, DC, Resar, R, Griffin, F, Federico, F, Frankel, T, Kimmel, N, et al. “Global trigger tool” shows that adverse events in hospitals may be ten times greater than previously measured. Health Aff 2011;30:581–9.10.1377/hlthaff.2011.0190Search in Google Scholar PubMed

81. Doupi, P SH, Bjørn, B, Deilkås, E, Nylén, U, Rutberg, H. Use of the Global Trigger Tool in patient safety improvement efforts: nordic experiences. Cogn Technol Work 2015;17:45–54.10.1007/s10111-014-0302-2Search in Google Scholar

82. Classen, D, Li, M, Miller, S, Ladner, D. An electronic health record-based real-time analytics program for patient safety surveillance and improvement. Health Aff 2018;37:1805–12. doi: 10.1377/hlthaff.2018.0728.

83. Danforth, KN, Smith, AE, Loo, RK, Jacobsen, SJ, Mittman, BS, Kanter, MH. Electronic clinical surveillance to improve outpatient care: diverse applications within an integrated delivery system. eGEMs 2014;2:1056. doi: 10.13063/2327-9214.1056.

84. Murphy, DR, Meyer, AN, Vaghani, V, Russo, E, Sittig, DF, Richards, KA, et al. Application of electronic algorithms to improve diagnostic evaluation for bladder cancer. Appl Clin Inform 2017;8:279–90. doi: 10.4338/ACI-2016-10-RA-0176.

85. Murphy, DR, Wu, L, Thomas, EJ, Forjuoh, SN, Meyer, AN, Singh, H. Electronic trigger-based intervention to reduce delays in diagnostic evaluation for cancer: a cluster randomized controlled trial. J Clin Oncol 2015;33:3560–7. doi: 10.1200/JCO.2015.61.1301.

86. Murphy, DR, Thomas, EJ, Meyer, AN, Singh, H. Development and validation of electronic health record-based triggers to detect delays in follow-up of abnormal lung imaging findings. Radiology 2015. doi: 10.1148/radiol.2015142530.

87. Singh, H, Hirani, K, Kadiyala, H, Rudomiotov, O, Davis, T, Khan, MM, et al. Characteristics and predictors of missed opportunities in lung cancer diagnosis: an electronic health record-based study. J Clin Oncol 2010;28:3307–15. doi: 10.1200/JCO.2009.25.6636.

88. National Quality Forum. Improving diagnostic quality and safety: final report. (Developed under Department of Health and Human Services Contract HHSM-500-2012-00009I, Task Order HHSM-500-T0026.) Washington, DC: NQF; 2017. Available from: [Accessed 2 Apr 2020].

89. Young, IJB, Luz, S, Lone, N. A systematic review of natural language processing for classification tasks in the field of incident reporting and adverse event analysis. Int J Med Inform 2019;132:103971. doi: 10.1016/j.ijmedinf.2019.103971.

90. Melton, GB, Hripcsak, G. Automated detection of adverse events using natural language processing of discharge summaries. J Am Med Inform Assoc 2005;12:448–57. doi: 10.1197/jamia.M1794.

91. Fong, A, Harriott, N, Walters, DM, Foley, H, Morrissey, R, Ratwani, RR. Integrating natural language processing expertise with patient safety event review committees to improve the analysis of medication events. Int J Med Inform 2017;104:120–5. doi: 10.1016/j.ijmedinf.2017.05.005.

92. Jagannatha, A, Liu, F, Liu, W, Yu, H. Overview of the first natural language processing challenge for extracting medication, indication, and adverse drug events from electronic health record notes (MADE 1.0). Drug Saf 2019;42:99–111. doi: 10.1007/s40264-018-0762-z.

93. Singh, H, Graber, M, Onakpoya, I, Schiff, GD, Thompson, MJ. The global burden of diagnostic errors in primary care. BMJ Qual Saf 2016;26:484–94. doi: 10.1136/bmjqs-2016-005401.

94. Al-Mutairi, A, Meyer, AN, Thomas, EJ, Etchegaray, JM, Roy, KM, Davalos, MC, et al. Accuracy of the Safer Dx Instrument to identify diagnostic errors in primary care. J Gen Intern Med 2016;31:602–8. doi: 10.1007/s11606-016-3601-x.

95. Cifra, CL, Ten Eyck, P, Dawson, JD, Reisinger, HS, Singh, H, Herwaldt, LA. Factors associated with diagnostic error on admission to a PICU: a pilot study. Pediatr Crit Care Med 2020. Available from: [Accessed 2 Apr 2020]. doi: 10.1097/PCC.0000000000002257.

96. Bergl, PA, Taneja, A, El-Kareh, R, Singh, H, Nanchal, RS. Frequency, risk factors, causes, and consequences of diagnostic errors in critically ill medical patients: a retrospective cohort study. Crit Care Med 2019;47:e902–10. doi: 10.1097/CCM.0000000000003976.

97. Singh, H, Khanna, A, Spitzmueller, C, Meyer, A. Recommendations for using the Revised Safer Dx Instrument to help measure and improve diagnostic safety. Diagnosis (Berl) 2019;6:315–23. doi: 10.1515/dx-2019-0012.

98. Mathews, BK, Fredrickson, M, Sebasky, M, Seymann, G, Ramamoorthy, S, Vilke, G, et al. Structured case reviews for organizational learning about diagnostic vulnerabilities: initial experiences from two medical centers. Diagnosis (Berl) 2020;7:27–35. doi: 10.1515/dx-2019-0032.

99. Schiff, GD, Hasan, O, Kim, S, Abrams, R, Cosby, K, Lambert, BL, et al. Diagnostic error in medicine: analysis of 583 physician-reported errors. Arch Intern Med 2009;169:1881–7. doi: 10.1001/archinternmed.2009.333.

100. Reilly, JB, Myers, JS, Salvador, D, Trowbridge, RL. Use of a novel, modified fishbone diagram to analyze diagnostic errors. Diagnosis (Berl) 2014;1:167–71. doi: 10.1515/dx-2013-0040.

101. Graber, ML, Rencic, J, Rusz, D, Papa, F, Croskerry, P, Zierler, B, et al. Improving diagnosis by improving education: a policy brief on education in healthcare professions. Diagnosis (Berl) 2018;5:107–18.

102. Henriksen, K, Dymek, C, Harrison, MI, Brady, PJ, Arnold, SB. Challenges and opportunities from the Agency for Healthcare Research and Quality (AHRQ) research summit on improving diagnosis: a proceedings review. Diagnosis (Berl) 2017;4:57–66. doi: 10.1515/dx-2017-0016.

103. Ohio Hospital Association. OPSI offers free diagnostic errors webinar on Sept. 19; 2018. Available from: [Accessed 2 Apr 2020].

104. Medford-Davis, L, Park, E, Shlamovitz, G, Suliburk, J, Meyer, AN, Singh, H. Diagnostic errors related to acute abdominal pain in the emergency department. Emerg Med J 2016;33:253–9. doi: 10.1136/emermed-2015-204754.

105. Singh, H, Graber, M. Reducing diagnostic error through medical home-based primary care reform. J Am Med Assoc 2010;304:463–4. doi: 10.1001/jama.2010.1035.

106. Sittig, DF, Singh, H. Toward more proactive approaches to safety in the electronic health record era. Jt Comm J Qual Patient Saf 2017;43:540–7.

107. Institute for Healthcare Improvement/National Patient Safety Foundation. Closing the loop: a guide to safer ambulatory referrals in the EHR era. Cambridge, MA: IHI; 2017. Available from: [Accessed 2 Apr 2020].

108. Agency for Healthcare Research and Quality. TeamSTEPPS; 2019. Available from: [Accessed 2 Apr 2020].

109. Bates, DW, Singh, H. Two decades since To Err Is Human: an assessment of progress and emerging priorities in patient safety. Health Aff 2018;37. Available from: [Accessed 2 Apr 2020]. doi: 10.1377/hlthaff.2018.0738.

110. Meyer, AND, Singh, H. The path to diagnostic excellence includes feedback to calibrate how clinicians think. J Am Med Assoc 2019;321:737–8.

111. Meyer, AN, Payne, VL, Meeks, DW, Rao, R, Singh, H. Physicians’ diagnostic accuracy, confidence, and resource requests: a vignette study. JAMA Intern Med 2013;173:1952–8. doi: 10.1001/jamainternmed.2013.10081.

112. Schiff, GD, Martin, SA, Eidelman, DH, Volk, LA, Ruan, E, Cassel, C, et al. Ten principles for more conservative, care-full diagnosis. Ann Intern Med 2018;169:643–5. doi: 10.7326/M18-1468.

113. Cifra, CL, Jones, KL, Ascenzi, JA, Bhalala, US, Bembea, MM, Newman-Toker, DE, et al. Diagnostic errors in a PICU: insights from the morbidity and mortality conference. Pediatr Crit Care Med 2015;16:468–76. doi: 10.1097/PCC.0000000000000398.

114. Gupta, A, Snyder, A, Kachalia, A, Flanders, S, Saint, S, Chopra, V. Malpractice claims related to diagnostic errors in the hospital. BMJ Qual Saf 2017;27. Available from: [Accessed 2 Apr 2020]. doi: 10.1136/bmjqs-2017-006774.

115. Shojania, KG, Burton, EC, McDonald, KM, Goldman, L. Changes in rates of autopsy-detected diagnostic errors over time: a systematic review. J Am Med Assoc 2003;289:2849–56. doi: 10.1001/jama.289.21.2849.

116. Walton, MM, Harrison, R, Kelly, P, Smith-Merry, J, Manias, E, Jorm, C, et al. Patients’ reports of adverse events: a data linkage study of Australian adults aged 45 years and over. BMJ Qual Saf 2017;26:743–50. doi: 10.1136/bmjqs-2016-006339.

117. Fowler, FJ Jr, Epstein, A, Weingart, SN, Annas, CL, Bolcic-Jankovic, D, Clarridge, B, et al. Adverse events during hospitalization: results of a patient survey. Jt Comm J Qual Patient Saf 2008;34:583–90. doi: 10.1016/S1553-7250(08)34073-2.

Received: 2020-04-13
Accepted: 2020-04-18
Published Online: 2020-07-24
Published in Print: 2021-02-23

© 2020 Walter de Gruyter GmbH, Berlin/Boston
