Reducing the incidence of diagnostic errors is increasingly a priority for government, professional, and philanthropic organizations. Several obstacles to measurement of diagnostic safety have hampered progress toward this goal. Although a coordinated national strategy to measure diagnostic safety remains an aspirational goal, recent research has yielded practical guidance for healthcare organizations to start using measurement to enhance diagnostic safety. This paper, concurrently published as an Issue Brief by the Agency for Healthcare Research and Quality, issues a “call to action” for healthcare organizations to begin measurement efforts using data sources currently available to them. Our aims are to outline the state of the science and provide practical recommendations for organizations to start identifying and learning from diagnostic errors. Whether by strategically leveraging current resources or building additional capacity for data gathering, nearly all organizations can begin their journeys to measure and reduce preventable diagnostic harm.
Diagnostic errors pose a significant threat to patient safety, resulting in substantial preventable morbidity and mortality and excess healthcare costs. Diagnostic errors are also among the most frequent reasons for medical malpractice claims. In 2015, the National Academies of Sciences, Engineering, and Medicine (NASEM) highlighted the scope and significance of diagnostic safety in its report Improving Diagnosis in Health Care. Among NASEM’s recommendations is a call for accrediting bodies to require healthcare organizations (HCOs) to “monitor the diagnostic process and identify, learn from, and reduce diagnostic errors and near misses in a timely fashion”.
Measurement of diagnostic performance is necessary for any systematic effort to improve diagnostic quality and safety. While numerous healthcare performance measures exist (the National Quality Forum [NQF] currently endorses hundreds of measures), none is being used routinely to assess and address diagnostic errors. Only a few U.S. healthcare organizations have explored measurement of diagnostic safety, and the development of diagnostic safety measures remains in its infancy. However, diagnostic errors are increasingly prominent in the national conversation on patient safety.
Several stakeholders have recently launched initiatives and research projects to advance development and implementation of diagnostic safety measurement. These stakeholders include the Agency for Healthcare Research and Quality, the NQF, the Centers for Medicare & Medicaid Services, the Society to Improve Diagnosis in Medicine, and philanthropic foundations such as the Gordon and Betty Moore Foundation. It is thus reasonable to expect that HCOs will face increasing expectations to measure and improve diagnostic safety as part of their quality and safety programs.
In parallel with calls for improving diagnostic safety is a growing emphasis on the concept of a learning health system (LHS). LHSs can be conceptualized at various levels, including the care team level, the HCO level, the external level (e.g., State or national), or within a specific improvement initiative or collaborative. In LHSs, leaders are committed to improvement, outcomes are systematically gathered, variation in care within the system is systematically analyzed to inform and improve care, and a continuous feedback cycle facilitates quality improvement based on evidence.
Rigorous measurement of quality and safety should be an essential component of an LHS. Further, measurement is only valuable to the extent that it is actionable and leads to improved care delivery and outcomes. New and emerging measures of diagnostic safety should therefore be evaluated not only in terms of their validity but also their potential to inform pragmatic strategies to improve diagnosis. At present, most of the tools and strategies that organizations use to detect patient safety concerns cannot specifically detect diagnostic error, and even when these errors are uncovered, analysis and learning are limited.
Although a coordinated national strategy to measure diagnostic safety remains an aspirational goal, recent research has yielded practical guidance for HCOs to start using measurement to enhance diagnostic safety. Measurement strategies validated in research settings are approaching readiness for implementation in an operational context. Equipped with this knowledge, HCOs have an opportunity to develop and implement strategies to learn from diagnostic safety events within their own walls.
In this Issue Brief, we discuss the state of the science of operational measurement of diagnostic safety, informed by recent peer-reviewed scientific publications, innovations in real-world healthcare settings, and initiatives to spur further development of diagnostic safety measures. Our aim is to provide knowledge and recommendations to encourage HCOs to begin to identify and learn from diagnostic errors.
Special considerations for measurement of diagnostic safety
Defining diagnostic errors and diagnostic performance
Measurement begins with a definition. The recent NASEM report defines diagnostic error as the “failure to establish an accurate and timely explanation of the patient’s health problem(s) or communicate that explanation to the patient”. This definition provides three key concepts that need to be operationalized, namely:
Accurately identifying the explanation (or diagnosis) of the patient’s problem,
Providing this explanation in a timely manner, and
Effectively communicating the explanation.
Diagnostic performance can be defined not only by accuracy and timeliness but also by efficiency (e.g., minimizing resource expenditure and limiting the patient’s exposure to risk). Measurement of diagnostic performance should hence consider the broader context of value-based care, including quality, risks, and costs, rather than focus simply on achieving the correct diagnosis in the shortest time.
Understanding the multifactorial context of diagnostic safety
Diagnostic errors are best conceptualized as events that may be distributed over time and place and shaped by multiple contextual factors and complex dynamics involving system-related, patient-related, team-related, and individual cognitive factors. Availability of clinical data that provide a longitudinal picture across care settings is essential to understanding a patient’s diagnostic journey.
A multidisciplinary framework, the Safer Dx framework (Figure 1), has been proposed to help advance the measurement of diagnostic errors. The framework follows the Donabedian Structure–Process–Outcome model, which approaches quality improvement in three domains:
Structure (characteristics of care providers, their tools and resources, and the physical/organizational setting);
Process (both interpersonal and technical aspects of activities that constitute healthcare); and
Outcome (change in the patient’s health status or behavior).
The recent NASEM report adapted concepts from the Safer Dx framework and other sources to generate a similar framework that emphasizes system-level learning and improvement.
Measurement must account for all aspects of the diagnostic process as well as the distributed nature of diagnosis, i.e., evolving over time and not limited to what happens during a single clinician-patient visit. The five interactive process dimensions of diagnosis include:
Patient-clinician encounter (history, physical examination, ordering of tests/referrals based on assessment);
Performance and interpretation of diagnostic tests;
Followup and tracking of diagnostic information over time;
Subspecialty and referral-specific factors; and
Patient-related factors.
The Safer Dx framework is interactive and acts as a continuous feedback loop. It includes the complex adaptive sociotechnical work system in which diagnosis takes place (structure), the process dimensions in which diagnoses evolve beyond the doctor’s visit (process), and the resulting outcomes of “safe diagnosis” (i.e., correct and timely) as well as patient and healthcare outcomes (outcomes). Valid and reliable measurement of diagnostic errors fosters collective mindfulness, organizational learning, improved collaboration, and better measurement tools and definitions. In turn, these proximal outcomes enable overall safer diagnosis, which contributes to both improved patient outcomes and greater value of healthcare. The knowledge created by measurement will also lead to changes in policy and practice to reduce diagnostic errors, as well as feedback for improvement.
Understanding these special considerations for measurement is essential to avoid confusion with other care processes (e.g., screening, prevention, management, or treatment). It also keeps diagnostic safety as a concept distinct from other common concerns such as communication breakdowns, readmissions, and care transitions, which may be either contributors or outcomes related to diagnostic errors. The Safer Dx framework underscores that diagnostic errors can emerge across multiple episodes of care and that corresponding clinical data across the care continuum are essential to inform measurement. However, in reality these data are not always available.
Providers and HCOs, often unaware of their patients’ ultimate diagnosis-related outcomes, could benefit from measurement of diagnostic performance, which can then contribute to better system-level understanding of the problem and enable improvement through feedback and learning. Developing and implementing measurement methods that focus on what matters is consistent with LHS approaches. In the absence of robust and accurate measures for diagnostic safety or any current external accountability metric, HCOs should focus on implementing measurement strategies that can lead to actionable data for improvement efforts.
Choosing data sources for measurement
Measurement must account for real-world clinical practice, taking into consideration not only the individual clinician’s decision making and reasoning but also the influence of systems, team members, and patients on the diagnostic process. As a first step, safety professionals need access to both reliable and valid data sources, ideally across the longitudinal continuum of patient care, as well as pragmatic tools to help measure and address diagnostic error.
Many methods have been suggested to study diagnostic errors. While most of them have been evaluated in research settings, few HCOs take a systematic approach to measuring or monitoring diagnostic error in routine clinical care. However, with appropriate tools and guidance, all HCOs should be able to adopt systematic approaches to measure and learn about diagnostic safety.
Not all methods to measure diagnostic performance are feasible to implement given limited time and resources. For instance, direct video observations of encounters between clinicians and patients would yield excellent data, but they are expensive and difficult to sustain on the scale needed for systemwide change. In contrast, a more viable alternative for most HCOs is to leverage existing data sources, such as large electronic health record (EHR) data repositories.
EHRs can be useful for measuring both discrete (and perhaps relatively uncommon) events and risks that are systematic, recurrent, or outside clinicians’ awareness. For example, HCOs with the resources to query or mine electronic data repositories have options for both retrospective and prospective measurements of diagnostic safety.
Even in the absence of standardized methods for measuring diagnostic error, HCOs should measure, learn from, and intervene to address threats to diagnostic safety by following basic measurement principles. For instance, the recently released Salzburg Statement on Moving Measurement Into Action was published through a collaboration between the Salzburg Global Seminar and the Institute for Healthcare Improvement. The statement recognizes the absence of standard and effective safety measures and provides eight broad principles for patient safety measurement. These principles act as “a call to action for all stakeholders in reducing harm, including policymakers, managers and senior leaders, researchers, health care professionals, and patients, families, and communities”.
In the same spirit, we propose several general assumptions that should guide and facilitate institutional practices for measurement of diagnostic safety and lead to actionable intelligence that identifies opportunities for improvement:
The underlying motivation for measurement is to learn from errors and improve clinical operations, as opposed to merely responding to external incentives or disincentives.
Efforts should focus on conditions that are relatively commonly missed and whose missed diagnosis leads to patient harm.
Measurement should not be used punitively to identify provider failures but rather should be used to uncover system-level problems. Information should be used to inform system fixes and related organizational processes and procedures that constitute the sociotechnical context of diagnostic safety.
Hindsight bias, i.e., when knowledge of the outcome significantly affects perception of past events, is inherent in most analyses and will need to be minimized by focusing on how missed opportunities can become learning opportunities.
With this general guidance in mind, we recommend that HCOs begin to monitor diagnostic safety using the most robust data sources currently available. Table 1 describes various data sources and strategies that could enable measurement of diagnostic error, along with their respective strengths and limitations. The following sections describe these data sources and strategies and the evidence to support their use for measurement of diagnostic safety.
|Data source||What can be learned||Approaches to operationalizing data collection||Approaches to operationalizing data synthesis||Strengths||Limitations||Examples|
|Routinely recorded quality and safety events|
|Solicited clinician reports|
|Solicited patient reports|
|Administrative billing data|
|Medical records (random, selective, and e-trigger enhanced)|
|Advanced data science methods, including natural language processing of EHR data*|
Note: Approaches marked with an asterisk (*) have potential for future application but are not currently developed for real-time surveillance.
Learning from known incidents and reports
No single data source will capture the full range of diagnostic safety concerns. Valuable information can be gleaned from even limited data sources so long as those who use the data remain mindful of their limitations for a given purpose. For instance, many routinely recorded discrete events lend themselves to retrospective analysis of diagnostic safety. Most healthcare organizations have incident reporting systems, although such reports have historically included few diagnostic events.
There is also an opportunity to leverage peer review programs to improve diagnostic self-assessment, feedback, and improvement. Similarly, autopsy reports, diagnostic discrepancies at admission versus discharge, escalations of care, and malpractice claims may be reviewed with special attention to opportunities to improve diagnosis. These data sources may not shed light on the frequency or scope of a problem, but they can help raise awareness of the impact and harm of diagnostic errors and, in some cases, specific opportunities for improvement.
Voluntary reports solicited specifically from clinicians who make diagnoses are another potentially useful source of data on diagnostic safety. For example, reports from clinicians who have witnessed diagnostic error have the advantage of rich detail that, at least in some cases, may offer insight into ways to prevent or mitigate future errors. However, no standardized mechanisms exist to report diagnostic errors. Despite widespread efforts to enable providers to report errors, clinicians find reporting tools onerous and are often unaware of errors they make.
It has also become clear that a local champion and quality improvement team support are needed to sustain reporting behavior. At present, few facilities or organizations allocate “protected time” that is essential for clinicians to report, analyze, and learn from safety events. Some of these challenges could be overcome by having frontline clinicians report events briefly and allowing organizational safety teams (which include other clinicians) to analyze them. Still, voluntary reporting alone cannot address the multitude of complex diagnostic safety concerns, and reporting can only be one aspect of a comprehensive measurement strategy.
Patients are often underexplored sources of information, and many organizations already conduct patient surveys, interviews, and reviews of patient complaints to learn about safety risks. Prior work in other areas of patient safety (e.g., medication errors, infection control) has examined the potential of engaging patients proactively to monitor safety risks and problems. With subsequent development, similar mechanisms could be used to monitor diagnosis-related issues.
One limitation of current patient reporting systems is the lack of validated patient-reported questions or methods to detect diagnostic safety concerns. Barriers to patient engagement in safety initiatives, including low health literacy, lack of wider acceptance of safety monitoring as part of the patient role, provider expectations and attitudes, and communication differences, will also need to be addressed to make the most of such efforts. Real-time adverse event and “near-miss” reporting systems, akin to those intended for use by clinicians, are another potential mechanism to collect patient-reported data on diagnostic safety.
Learning from existing large datasets
Whereas direct reports from patients and clinicians may offer unique insights, HCOs may also be ready to use large datasets to uncover trends in diagnostic performance and events that are not otherwise easily identified. Administrative and billing data are widely available in most modern HCOs and have been proposed as one source of data for detecting missed opportunities for accurate and timely diagnosis. For example, diagnosis codes assigned at successive clinical encounters may be used as a proxy for the evolution of a clinical diagnosis; if significant discrepancies are found, they may prompt a search for reasons. Symptom-disease dyads, such as abdominal pain followed by a diagnosis of appendicitis a few days later or dizziness followed by stroke, are examples of this approach.
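To illustrate the symptom-disease dyad approach, the sketch below flags patients whose symptom-coded visit is followed by the target diagnosis within a short window. The code layout, the two-week window, and the sample records are illustrative assumptions, not a validated algorithm; codes R42 and I63 stand in for dizziness and ischemic stroke.

```python
from datetime import date, timedelta

SYMPTOM_CODES = {"R42"}  # dizziness (illustrative code choice)
DISEASE_CODES = {"I63"}  # ischemic stroke (illustrative code choice)
LOOKBACK_DAYS = 14       # assumed window; tuned per dyad in practice

def find_dyad_flags(encounters):
    """Flag symptom visits followed by the target diagnosis within the window.

    encounters: iterable of (patient_id, visit_date, diagnosis_code) tuples.
    Returns (patient_id, symptom_date, disease_date) tuples for chart review.
    """
    by_patient = {}
    for pid, d, code in encounters:
        by_patient.setdefault(pid, []).append((d, code))
    flags = []
    for pid, visits in by_patient.items():
        visits.sort()
        for d1, c1 in visits:
            if c1 not in SYMPTOM_CODES:
                continue
            for d2, c2 in visits:
                if c2 in DISEASE_CODES and d1 < d2 <= d1 + timedelta(days=LOOKBACK_DAYS):
                    flags.append((pid, d1, d2))
    return flags

encounters = [
    ("p1", date(2023, 3, 1), "R42"),  # dizziness at ED visit
    ("p1", date(2023, 3, 5), "I63"),  # stroke 4 days later -> flagged
    ("p2", date(2023, 3, 1), "R42"),  # dizziness with no later stroke
]
print(find_dyad_flags(encounters))    # flags only patient p1's visit pair
```

Flagged pairs are candidates for review, not confirmed errors; as noted above, administrative codes lack the sensitivity and clinical detail to establish a diagnostic error on their own.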
This strategy may be intuitive to patient safety leaders because several other safety measurement methods are also based on diagnosis codes extracted from administrative datasets (e.g., patient safety indicators). However, unlike specific safety events that can be coded with good sensitivity (e.g., retention of foreign objects during procedures), administrative data are not sufficiently sensitive to detect diagnostic errors. Moreover, administrative data lack relevant clinical details about diagnostic processes that can be improved. Administrative data sources are mainly useful insofar as they can be used to identify patterns or specific cohorts of patients to further review for presence of diagnostic error, based on diagnosis or other relevant characteristics that may be considered risk factors or of special interest for diagnostic safety improvement.
Medical records can be a rich source of data, as they contain clinical details and reflect the patient’s longitudinal care journey. Although medical record reviews are considered valuable and sometimes even the gold standard for detecting diagnostic errors, it is often not clear which records to review. Reviewing records at random can be burdensome and resource intensive. However, more selective methods can identify a high-yield subset (e.g., reviewing records of patients diagnosed with specific clinical conditions at high risk of being missed, such as colorectal cancer or spinal epidural abscess). Such selective methods can be more efficient than voluntary reporting or nonselective or random manual review.
Another way to select records is through the “trigger” approach, which aims to “alert patient safety personnel to possible adverse events so they can review the medical record to determine if an actual or potential adverse event has occurred”. EHRs and clinical data warehouses make it possible to identify signals suggestive of missed diagnosis prior to detailed reviews. Using electronic algorithms, or “e-triggers,” that mine vast amounts of clinical and administrative data to identify these signals, HCOs can search EHRs on a scale that would be untenable with manual or random search methods.
For example, algorithms could identify patients with a certain type of abnormal test result (denominator) and then identify which of those results have still not been acted upon after a certain length of time (numerator). This type of algorithm is possible because the data about an abnormal result (e.g., an abnormal hemoglobin value with microcytic anemia) and the followup action needed (e.g., colonoscopy for a 65-year-old patient) are (or should be) coded in the EHR. Similarly, certain patterns, such as unexpected hospitalizations after a primary care visit, can be identified more accurately if corresponding clinical data are available.
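The numerator/denominator logic described above can be sketched as follows. The 60-day window, the record layout, and the sample data are hypothetical assumptions for illustration; a production e-trigger would query the EHR or data warehouse directly and encode clinically validated criteria per test type.

```python
from datetime import date, timedelta

FOLLOWUP_WINDOW = timedelta(days=60)  # assumed window; set per test type

def overdue_results(abnormal_results, followup_actions, as_of):
    """Return abnormal results still lacking followup after the window.

    abnormal_results: dict result_id -> (patient_id, result_date)  [denominator]
    followup_actions: set of result_ids with a documented followup action
    """
    overdue = []
    for rid, (pid, rdate) in abnormal_results.items():
        if rid not in followup_actions and as_of - rdate > FOLLOWUP_WINDOW:
            overdue.append((rid, pid))
    return overdue  # numerator cases, flagged for targeted chart review

results = {
    "r1": ("p1", date(2023, 1, 2)),   # abnormal result, never acted on
    "r2": ("p2", date(2023, 1, 2)),   # acted on (e.g., colonoscopy ordered)
    "r3": ("p3", date(2023, 3, 20)),  # still within the followup window
}
acted = {"r2"}
print(overdue_results(results, acted, as_of=date(2023, 4, 1)))  # [('r1', 'p1')]
```

The trigger rate (numerator over denominator) can then be trended over time, while the flagged cases feed the structured reviews discussed below.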
To enhance the yield of record reviews, e-triggers can be developed to alert personnel to potential patient safety events and enable targeted review of high-risk patients. The Institute for Healthcare Improvement’s Global Trigger Tools, which include both manual and electronic trigger tools to detect inpatient events, are widely used but were not specifically designed to detect diagnostic errors. More targeted e-triggers for diagnostic safety measurement and monitoring can be developed and integrated within existing patient safety surveillance systems in the future and may enhance the yield of record reviews. Other examples of possible e-triggers that are either in development or approaching wider testing and implementation are described in Table 2.
|Safer Dx diagnostic process||Safer Dx trigger example||Potential diagnostic error|
|Patient-provider encounter||Emergency department or primary care visit followed by unplanned hospitalization||Missed red flag findings or incorrect diagnosis during initial office visit|
|ED/PC visit within 72 h after ED or hospital discharge||Missed red flag findings during initial ED/PC or hospital visit|
|Unexpected transfer from hospital general floor to ICU within 24 h of ED admission||Missed red flag findings during admission|
|Performance and interpretation of diagnostic tests||Amended imaging report||Missed findings on initial read or lack of communication of amended findings|
|Follow-up and tracking of diagnostic information||Abnormal test result with no timely followup action||Abnormal test result missed|
|Referral-related factors||Urgent specialty referral followed by discontinued referral within 7 days||Delay in diagnosis from lack of specialty expertise|
|Patient-related factors||Poor rating on patient experience scores post ED/PC visit||Patient report of communication barriers related to missed diagnosis|
Both selective and e-trigger enhanced reviews are advantageous because they can allow development of potential measures for wider testing and adoption. For instance, measures of diagnostic test result follow-up may focus specifically on abnormal test results that are suggestive of serious or time-sensitive diagnoses, such as cancer. Examples of diagnostic safety measures could include the proportion of documented “red-flag” symptoms or test results that receive timely follow-up, or the proportion of patients with cancer newly diagnosed within 60 days of first presentation of known red flags. Depending on an HCO’s priorities, safety leaders could consider additional development, testing, and potential implementation of measure concepts proposed by NQF and other researchers.
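As a sketch of how the second measure concept above (cancer newly diagnosed within 60 days of first red-flag presentation) might be computed once case-level dates have been abstracted from record review, assuming a simple list of date pairs:

```python
from datetime import date

MEASURE_WINDOW_DAYS = 60  # window from the measure concept described above

def timely_diagnosis_rate(cases):
    """Proportion of newly diagnosed cancer cases diagnosed within the
    window after the first documented red-flag presentation.

    cases: list of (red_flag_date, diagnosis_date) pairs, one per patient.
    Returns None when no cases are available (undefined rate).
    """
    if not cases:
        return None
    timely = sum(1 for flag, dx in cases if (dx - flag).days <= MEASURE_WINDOW_DAYS)
    return timely / len(cases)

cases = [
    (date(2023, 1, 1), date(2023, 2, 10)),  # 40 days -> timely
    (date(2023, 1, 1), date(2023, 5, 1)),   # 120 days -> delayed
]
print(timely_diagnosis_rate(cases))  # 0.5
```

The hard part in practice is not this arithmetic but reliably establishing the two dates, which is exactly what selective and e-trigger enhanced reviews help with.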
Other mechanisms for mining clinical data repositories, such as natural language processing (NLP) algorithms, are still early in development but may become a useful complement to future e-triggers for detection of diagnostic safety events. Whereas e-triggers leverage structured data to identify possible safety concerns, a substantial proportion of data in EHRs is unstructured and therefore inaccessible to e-triggers. NLP and machine learning techniques could help analyze and interpret large volumes of unstructured textual data, such as those in clinical narratives and free-text fields, and reduce the burden of medical record reviews by selecting records with potentially the highest yield. While research has examined the predictive validity of NLP algorithms for detection of safety incidents and adverse events, to date this methodology has not been applied to or validated for measurement of diagnostic safety.
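A full NLP pipeline is beyond the scope of this brief, but the record-selection idea can be illustrated with a crude keyword stand-in: score free-text notes by trigger-phrase hits and surface the highest-scoring ones for manual review. The phrases and notes below are invented for illustration; a real system would use trained models, negation handling, and clinical vocabularies rather than a keyword list.

```python
import re

# Illustrative trigger phrases only; not a validated clinical lexicon.
TRIGGER_PATTERNS = [
    r"missed (diagnosis|finding)",
    r"return(ed)? to (the )?ED",
    r"symptoms? (were )?attributed to",
]

def rank_notes(notes):
    """Score free-text notes by trigger-phrase hits; return ids, highest first.

    notes: dict note_id -> note text. Notes with zero hits are excluded.
    """
    scored = []
    for note_id, text in notes.items():
        hits = sum(bool(re.search(p, text, re.IGNORECASE)) for p in TRIGGER_PATTERNS)
        scored.append((hits, note_id))
    scored.sort(reverse=True)
    return [note_id for hits, note_id in scored if hits > 0]

notes = {
    "n1": "Patient returned to ED; symptoms attributed to anxiety at first visit.",
    "n2": "Routine follow-up, no concerns.",
}
print(rank_notes(notes))  # ['n1']
```

Even this crude prioritization shows the intended role of text mining here: shrinking the review queue, not rendering a verdict on diagnostic error.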
Synthesizing data and enhancing confidence in measurement
Determining the presence of diagnostic error is complex and requires additional evaluation for missed opportunities. A binary classification (e.g., error or no error) may be insufficient for cases involving greater uncertainty, which call for more graded assessment approaches reflecting varying degrees of confidence in the determination of error.
Depending on the measurement method, a thorough content review may be sufficient to identify missed opportunities for diagnosis, contributing factors, and harm or impact. However, systematic approaches to surveillance and measurement of diagnostic safety often warrant the use of structured data collection instruments that assess diagnostic errors using objective criteria, as well as taxonomies to classify process breakdowns. For example, the Revised Safer Dx Instrument is a validated tool that can be used to do an initial screen for presence or absence of diagnostic error in a case. It helps users identify potential diagnostic errors in a standardized way for further analysis and safety improvement efforts.
Structured assessment instruments can also be used to provide clinicians with data for feedback and reflection that they otherwise may not receive. Furthermore, process breakdowns can be analyzed using approaches such as the Diagnostic Error Evaluation and Research taxonomy or the Safer Dx Process breakdown taxonomy, both of which help to identify where in the diagnostic process a problem occurred. Other factors related to diagnostic error, such as the presence of patient harm (e.g., clear evidence of harm versus “near-misses”), preventability, and actionability, may also be important to define in advance so that the selected measurement strategy aligns with the learning and improvement goals. Nevertheless, the science of understanding the complex mix of cognitive and sociotechnical contributory factors and implementing effective solutions based on these data still needs additional development.
Getting ready for measurement: overcoming barriers and taking next steps
HCOs are resource constrained, and efforts to measure quality and safety of healthcare are anchored foremost to the measures specifically required by accrediting agencies and payers. Currently, measures of diagnostic performance and safety are not among these required measures, and the burden of additional data collection is a daunting prospect for HCOs that already struggle to meet existing requirements. Furthermore, we need national consensus regarding what aspects of diagnostic safety can be measured pragmatically in real-world care settings.
The current lack of incentives and models to measure diagnostic safety can make it difficult to get started even in the best-case scenario when leadership support and resources are available to support improvement efforts focused on diagnosis. Nevertheless, as the burden of diagnostic errors is increasingly recognized and as measurement strategies become better defined, diagnostic safety will no longer remain sidelined in the landscape of healthcare quality and safety.
Despite the additional burden that measurement for improvement would entail, the benefit of these activities for improving patient safety and stimulating learning and feedback could outweigh concerns. In fact, some HCOs have already started on their journeys, including one that has pursued measurement on multiple fronts and another that aims to pursue measurement as part of becoming a “Learning and Exploration of Diagnostic Excellence (LEDE)” organization. Now is the time for other HCOs to use the data already available to them to begin to detect, understand, and learn from diagnostic errors.
While diagnostic errors occur across the spectrum of medical practice, measurement should be strategic and focused on areas with strong potential for learning and impact. As with other aspects of patient safety measurement, goals should include:
Creating learning opportunities from past events with both potential and real harm,
Ensuring reliability in diagnostic safety,
Anticipating and preparing for problems related to diagnosis, and
Integrating and learning from the knowledge generated.
Candidate targets for measurement can come from sources such as:
Published literature (e.g., research showing a high rate of missed test results), or
Priorities identified in national initiatives (e.g., timely diagnosis of cancer).
Once a target is identified, HCOs should use measurement strategies that balance validity (i.e., for case finding) with yield. Figure 2 visualizes the implementation readiness of several diagnostic safety measurement strategies discussed in this brief. For example, e-trigger enhanced structured chart review appears suitable for operational measurement with additional development. Systems without EHR capabilities can use other data sources (e.g., selective reviews, event reports) to begin measurement as soon as possible, rather than wait for the information technology infrastructure needed to implement EHR-based algorithms. The strategies depicted in Figure 2 represent the current state of the science and are subject to change as new innovations are validated.
Table 3 presents a summary of the strategies and the factors affecting their use.
|Measurement strategy||Stage of development||Current potential availability and/or accessibility of data source||Estimated yield relative to effort|
|Review of solicited reports from patients||Exploratory||Low||Medium|
|Advanced data science methods using EHR data (e.g., NLP)||Exploratory||Low||Very large|
|Mining administrative billing data||Exploratory||High||Very small|
|E-trigger enhanced chart review||Moderate||Moderate||Very large|
|Institutional peer review processes||Moderate||High||Medium|
|Morbidity and mortality conferences||Moderate||High||Medium|
|Review of solicited brief reports from clinicians||Moderate||Moderate||Very large|
|Selective chart review of high-risk cohorts||Mature||High||Large|
|Random chart review||Mature||High||Very small|
|Review of autopsy reports||Mature||Low||Large|
|Review of malpractice claims||Mature||High||Medium|
|Review of incident reports||Mature||High||Small|
Based on what HCOs are learning, they could conduct additional activities related to quality improvement, such as:
Better managing test results to make sure they are acted on,
Conducting a self-assessment of reliability of communication and reporting of test results,
Closing referral loops, and
Enhancing teamwork and communication with patients.
Meanwhile, research should stimulate the development of more rigorous measures that are better correlated with not just more timely and accurate diagnosis but also less preventable diagnostic harm.
For additional implementation and adoption, especially beyond measurement to inform the institutional quality improvement activities discussed in this brief, additional balance measures will need to be developed. For example, measures that focus on underdiagnosis of a certain condition (e.g., stroke) may lead to overtesting for that condition (e.g., magnetic resonance imaging for all patients with dizziness in primary or emergency care). Thus, research efforts must continue to inform and strengthen the rigor of measurement.
Other unintended consequences, including gaming and measure fatigue, may emerge from implementing suboptimal and inadequately validated performance measures for public reporting, performance incentives, or penalties. Before measures are used for accountability, they will need to be tested, validated, and linked to improvement of suboptimal care or outcomes. This document and its recommendations thus focus largely on measurement for improvement at the HCO level, not measurement for accountability purposes.
In the long term, measures will need to be developed to address diagnostic excellence, defined in terms of the ability to make a diagnosis using the fewest resources while maximizing patient experiences, managing and communicating uncertainty to patients, and tolerating watchful waiting when unfocused treatment may be harmful. As measurement methods evolve, they should address concepts such as uncertainty and clinician calibration (the degree to which clinicians' confidence in the accuracy of their diagnostic decision making aligns with their actual accuracy). Understanding and measuring these concepts in the diagnostic safety equation is essential to optimize the balance between reducing overuse and addressing underuse of diagnostic tests and other resources.
While measurement of diagnostic error has been challenging, recent research and early implementation in operational settings have produced new evidence that can now stimulate progress. Whether by leveraging current resources or building capacity for additional data gathering, HCOs have a variety of options for beginning measurement to reduce preventable diagnostic harm. Measurement can be implemented progressively and, eventually, can be supported by payers and accrediting organizations. This overview is a call to action for leaders of both large and small HCOs in any setting to assess how they could begin their journey to measure and reduce preventable diagnostic harm and then to act on that assessment.
Funding source: Agency for Healthcare Research and Quality
Award Identifier / Grant number: 75P00119R00265
This article is being published concurrently with AHRQ Publication No. 20-0040-1-EF. Singh H, Bradford A, Goeschel C. Operational Measurement of Diagnostic Safety: State of the Science. AHRQ Publication #20-0040-1-EF. Rockville, MD: Agency for Healthcare Research and Quality; April 2020. https://www.ahrq.gov/sites/default/files/wysiwyg/patient-safety/reports/issue-briefs/state-of-science.pdf.
Research funding: This paper was funded under Contract No. HHSP233201500022I/75P00119F37006 from the Agency for Healthcare Research and Quality (AHRQ), U.S. Department of Health and Human Services. The authors are solely responsible for this document’s contents, findings, and conclusions, which do not necessarily represent the views of AHRQ. Readers should not interpret any statement in this product as an official position of AHRQ or of the U.S. Department of Health and Human Services. Dr. Singh is funded in part by the Houston Veterans Administration (VA) Health Services Research and Development (HSR&D) Center for Innovations in Quality, Effectiveness, and Safety (CIN13-413), the VA HSR&D Service (CRE17-127 and the Presidential Early Career Award for Scientists and Engineers USA 14-274), the VA National Center for Patient Safety, the Agency for Healthcare Research and Quality (R01HS27363), and the Gordon and Betty Moore Foundation.
Author contributions: All authors have accepted responsibility for the entire content of this manuscript and approved its submission.
Competing interests: The funding organization(s) reviewed the contents of the issue brief but otherwise had no role in writing the report or in the decision to submit it for publication. The views expressed in this article do not represent the views of the U.S. Department of Veterans Affairs or the United States government.
2. Saber Tehrani, AS, Lee, H, Mathews, SC, Shore, A, Makary, MA, Pronovost, PJ, et al. 25-year summary of U.S. malpractice claims for diagnostic errors 1986-2010: an analysis from the National Practitioner Data Bank. BMJ Qual Saf 2013;22:672–80. doi: 10.1136/bmjqs-2012-001550.
3. National Academies of Sciences, Engineering, and Medicine. Improving diagnosis in health care; 2015. Available from: http://iom.nationalacademies.org/Reports/2015/Improving-Diagnosis-in-Healthcare.aspx [Accessed 31 Mar 2020].
5. National Quality Forum. Measures, reports & tools; 2020. Available from: https://www.qualityforum.org/Measures_Reports_Tools.aspx [Accessed 31 Mar 2020].
7. Gordon and Betty Moore Foundation. New projects aim to develop clinical quality measures to improve diagnosis; 2020. Available from: https://www.moore.org/article-detail?newsUrlName=new-projects-aim-to-develop-clinical-quality-measures-to-improve-diagnosis [Accessed 31 Mar 2020].
8. National Quality Forum. Reducing diagnostic error: measurement considerations; 2019. Available from: https://www.qualityforum.org/ProjectDescription.aspx?projectID=90704 [Accessed 31 Mar 2020].
9. Agency for Healthcare Research and Quality. Diagnostic safety and quality; 2020. Available from: https://www.ahrq.gov/topics/diagnostic-safety-and-quality.html [Accessed 31 Mar 2020].
10. Centers for Medicare & Medicaid Services. TEP current panels; 2020. Available from: https://www.cms.gov/Medicare/Quality-Initiatives-Patient-Assessment-Instruments/MMS/TEP-Current-Panel#a0214-1 [Accessed 31 Mar 2020].
11. Health Research & Educational Trust. Improving diagnosis in medicine. Diagnostic error change package; 2018. Available from: https://www.improvediagnosis.org/wp-content/uploads/2018/11/improving-diagnosis-in-medicine-change-package-11-8.pdf [Accessed 1 Apr 2020].
12. Olsen, LA, Aisner, D, McGinnis, JM, eds. Institute of Medicine (US) Roundtable on Evidence-Based Medicine. The learning healthcare system: workshop summary. The National Academies Collection: reports funded by National Institutes of Health. Washington, DC: National Academies Press; 2007. Available from: https://www.ncbi.nlm.nih.gov/pubmed/21452449 [Accessed 1 Apr 2020].
13. Vincent, C, Burnett, S, Carthey, J. Safety measurement and monitoring in healthcare: a framework to guide clinical teams and healthcare organisations in maintaining safety. BMJ Qual Saf 2014;23:670–7. doi: 10.1136/bmjqs-2013-002757.
14. Singh, H, Sittig, DF. Advancing the science of measurement of diagnostic errors in healthcare: the Safer Dx framework. BMJ Qual Saf 2015;24:103–10. doi: 10.1136/bmjqs-2014-003675.
15. McGlynn, EA, McDonald, KM, Cassel, CK. Measurement is essential for improving diagnosis and reducing diagnostic error: a report from the Institute of Medicine. J Am Med Assoc 2015;314:2501–2. doi: 10.1001/jama.2015.13453.
17. Graber, ML, Trowbridge, RL, Myers, JS, Umscheid, CA, Strull, W, Kanter, MH. The next organizational challenge: finding and addressing diagnostic error. Jt Comm J Qual Patient Saf 2014;40:102–10. doi: 10.1016/S1553-7250(14)40013-8.
18. Zwaan, L, de Bruijne, M, Wagner, C, Thijs, A, Smits, M, van der Wal, G, et al. Patient record review of the incidence, consequences, and causes of diagnostic adverse events. Arch Intern Med 2010;170:1015–21. doi: 10.1001/archinternmed.2010.146.
19. Singh, H, Giardina, T, Forjuoh, S, Reis, M, Kosmach, S, Khan, M, et al. Electronic health record-based surveillance of diagnostic errors in primary care. BMJ Qual Saf 2012;21:93–100. doi: 10.1136/bmjqs-2011-000304.
22. Hofer, TP, Kerr, EA, Hayward, RA. What is an error? Eff Clin Pract 2000;3:261–9. PMID: 11151522.
24. Singh, H, Giardina, TD, Meyer, AN, Forjuoh, SN, Reis, MD, Thomas, EJ. Types and origins of diagnostic errors in primary care settings. JAMA Intern Med 2013;173:418–25. doi: 10.1001/jamainternmed.2013.2777.
25. Sittig, DF, Singh, H. A new sociotechnical model for studying health information technology in complex adaptive healthcare systems. Qual Saf Health Care 2010;19:i68–74. doi: 10.1136/qshc.2010.042085.
26. Henriksen, K, Brady, J. The pursuit of better diagnostic performance: a human factors perspective. BMJ Qual Saf 2013;22:ii1–5. doi: 10.1136/bmjqs-2013-001827.
27. Dhaliwal, G. Annals for Hospitalists inpatient notes - diagnostic excellence starts with an incessant watch. Ann Intern Med 2017;167:HO2–3. doi: 10.7326/M17-2447.
28. Lane, KP, Chia, C, Lessing, JN, Limes, J, Mathews, B, Schaefer, J, et al. Improving resident feedback on diagnostic reasoning after handovers: the LOOP Project. J Hosp Med 2019;14:622–5. doi: 10.12788/jhm.3262.
29. Shenvi, EC, Feupe, SF, Yang, H, El-Kareh, R. “Closing the loop”: a mixed-methods study about resident learning from outcome feedback after patient handoffs. Diagnosis (Berl) 2018;5:235–42. doi: 10.1515/dx-2018-0013.
31. Solberg, LI, Mosser, G, McDonald, S. The three faces of performance measurement: improvement, accountability, and research. Jt Comm J Qual Improv 1997;23:135–47. doi: 10.1016/S1070-3241(16)30305-4.
33. Singh, H, Upadhyay, DK, Torretti, D. Developing health care organizations that pursue learning and exploration of diagnostic excellence: an action plan. Acad Med 2019. Available from: https://www.ncbi.nlm.nih.gov/pubmed/31688035 [Accessed 1 Apr 2020]. doi: 10.1097/ACM.0000000000003062.
34. Amelung, D, Whitaker, KL, Lennard, D, Ogden, M, Sheringham, J, Zhou, Y, et al. Influence of doctor-patient conversations on behaviours of patients presenting to primary care with new or persistent symptoms: a video observation study. BMJ Qual Saf 2019. Available from: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7057803/ [Accessed 1 Apr 2020]. doi: 10.1136/bmjqs-2019-009485.
35. Murphy, DR, Meyer, AN, Sittig, DF, Meeks, DW, Thomas, EJ, Singh, H. Application of electronic trigger tools to identify targets for improving diagnostic safety. BMJ Qual Saf 2019;28:151–9. doi: 10.1136/bmjqs-2018-008086.
36. Institute for Healthcare Improvement and Salzburg Global Seminar. The Salzburg statement on moving measurement into action: global principles for measuring patient safety; 2019. Available from: https://www.salzburgglobal.org/fileadmin/user_upload/Documents/2010-2019/2019/Session_622/SalzburgGlobal_Statement_622_Patient_Safety_01.pdf [Accessed 31 Mar 2020].
37. Smith, MW, Davis, GT, Murphy, DR, Laxmisan, A, Singh, H. Resilient actions in the diagnostic process and system performance. BMJ Qual Saf 2013;22:1006–13. doi: 10.1136/bmjqs-2012-001661.
38. Murff, HJ, Patel, VL, Hripcsak, G, Bates, DW. Detecting adverse events for patient safety research: a review of current methodologies. J Biomed Inform 2003;36:131–43. doi: 10.1016/j.jbi.2003.08.003.
40. Rosen, AK. Are we getting better at measuring patient safety? Perspectives on safety; 2010. Available from: https://psnet.ahrq.gov/perspective/are-we-getting-better-measuring-patient-safety [Accessed 1 Apr 2020].
41. Levtzion-Korach, O, Frankel, A, Alcalai, H, Keohane, C, Orav, J, Graydon-Baker, E, et al. Integrating incident data from five reporting systems to assess patient safety: making sense of the elephant. Jt Comm J Qual Patient Saf 2010;36:402–10. doi: 10.1016/S1553-7250(10)36059-4.
42. Meeks, DW, Meyer, AN, Rose, B, Walker, YN, Singh, H. Exploring new avenues to assess the sharp end of patient safety: an analysis of nationally aggregated peer review data. BMJ Qual Saf 2014;23:1023–30. doi: 10.1136/bmjqs-2014-003239.
43. Tejerina, EE, Padilla, R, Abril, E, Frutos-Vivar, F, Ballen, A, Rodriguez-Barbero, JM, et al. Autopsy-detected diagnostic errors over time in the intensive care unit. Hum Pathol 2018;76:85–90. doi: 10.1016/j.humpath.2018.02.025.
44. Hautz, WE, Kammer, JE, Hautz, SC, Sauter, TC, Zwaan, L, Exadaktylos, AK, et al. Diagnostic error increases mortality and length of hospital stay in patients presenting through the emergency room. Scand J Trauma Resusc Emerg Med 2019;27:54. doi: 10.1186/s13049-019-0629-z.
45. Gov-Ari, E, Leann Hopewell, B. Correlation between pre-operative diagnosis and post-operative pathology reading in pediatric neck masses - a review of 281 cases. Int J Pediatr Otorhinolaryngol 2015;79:2–7. doi: 10.1016/j.ijporl.2014.11.011.
46. Bhise, V, Sittig, DF, Vaghani, V, Wei, L, Baldwin, J, Singh, H. An electronic trigger based on care escalation to identify preventable adverse events in hospitalised patients. BMJ Qual Saf 2018;27:241–6. doi: 10.1136/bmjqs-2017-006975.
47. Davalos, MC, Samuels, K, Meyer, AN, Thammasitboon, S, Sur, M, Roy, K, et al. Finding diagnostic errors in children admitted to the PICU. Pediatr Crit Care Med 2017;18:265–71. doi: 10.1097/PCC.0000000000001059.
48. Schiff, GD, Puopolo, AL, Huben-Kearney, A, Yu, W, Keohane, C, McDonough, P, et al. Primary care closed claims experience of Massachusetts malpractice insurers. JAMA Intern Med 2013;173:2063–8. doi: 10.1001/jamainternmed.2013.11070.
49. Gandhi, TK, Kachalia, A, Thomas, EJ, Puopolo, AL, Yoon, C, Brennan, TA, et al. Missed and delayed diagnoses in the ambulatory setting: a study of closed malpractice claims. Ann Intern Med 2006;145:488–96. doi: 10.7326/0003-4819-145-7-200610030-00006.
50. Kachalia, A, Gandhi, TK, Puopolo, AL, Yoon, C, Thomas, EJ, Griffey, R, et al. Missed and delayed diagnoses in the emergency department: a study of closed malpractice claims from 4 liability insurers. Ann Emerg Med 2007;49:196–205. doi: 10.1016/j.annemergmed.2006.06.035.
51. Singh, H, Thomas, EJ, Petersen, LA, Studdert, DM. Medical errors involving trainees: a study of closed malpractice claims from 5 insurers. Arch Intern Med 2007;167:2030–6. doi: 10.1001/archinte.167.19.2030.
52. Okafor, N, Payne, VL, Chathampally, Y, Miller, S, Doshi, P, Singh, H. Using voluntary reports from physicians to learn from diagnostic errors in emergency medicine. Emerg Med J 2015. Available from: https://www.ncbi.nlm.nih.gov/pubmed/26531860 [Accessed 2 Apr 2020]. doi: 10.1136/emermed-2014-204604.
54. Shojania, K. The elephant of patient safety: what you see depends on how you look. Jt Comm J Qual Patient Saf 2010;36:399–401. doi: 10.1016/s1553-7250(10)36058-2.
55. Ward, JK, Armitage, G. Can patients report patient safety incidents in a hospital setting? A systematic review. BMJ Qual Saf 2012;21:685–99. doi: 10.1136/bmjqs-2011-000213.
56. Scott, J, Heavey, E, Waring, J, De Brun, A, Dawson, P. Implementing a survey for patients to provide safety experience feedback following a care transition: a feasibility study. BMC Health Serv Res 2019;19:613. doi: 10.1186/s12913-019-4447-9.
57. Haroutunian, P, Alsabri, M, Kerdiles, FJ, Adel Ahmed Abdullah, H, Bellou, A. Analysis of factors and medical errors involved in patient complaints in a European emergency department. Adv J Emerg Med 2018;2:e4.
58. Gillespie, A, Reader, TW. Patient-centered insights: using health care complaints to reveal hot spots and blind spots in quality and safety. Milbank Q 2018;96:530–67. doi: 10.1111/1468-0009.12338.
59. Longtin, Y, Sax, H, Leape, LL, Sheridan, SE, Donaldson, L, Pittet, D. Patient participation: current knowledge and applicability to patient safety. Mayo Clin Proc 2010;85:53–62. doi: 10.4065/mcp.2009.0248.
60. Weingart, SN, Toth, M, Eneman, J, Aronson, MD, Sands, DZ, Ship, AN, et al. Lessons from a patient partnership intervention to prevent adverse drug events. Int J Qual Health Care 2004;16:499–507. doi: 10.1093/intqhc/mzh083.
61. Giardina, TD, Haskell, H, Menon, S, Hallisy, J, Southwick, FS, Sarkar, U, et al. Learning from patients’ experiences related to diagnostic errors is essential for progress in patient safety. Health Aff 2018;37:1821–7. doi: 10.1377/hlthaff.2018.0698.
62. Smith, K, Baker, K, Wesley, D, Zipperer, L, Clark, MD. Guide to improving patient safety in primary care settings by engaging patients and families: environmental scan report. (Prepared by: MedStar Health Research Institute under Contract No. HHSP233201500022I/HHSP23337002T). Rockville, MD: Agency for Healthcare Research and Quality; 2017. AHRQ Publication No. 17-0021-2-EF.
63. Huerta, TR, Walker, C, Murray, KR, Hefner, JL, McAlearney, AS, Moffatt-Bruce, S. Patient safety errors: leveraging health information technology to facilitate patient reporting. J Healthc Qual 2016;38:17–23. doi: 10.1097/JHQ.0000000000000022.
64. Liberman, AL, Newman-Toker, DE. Symptom-Disease Pair Analysis of Diagnostic Error (SPADE): a conceptual framework and methodological approach for unearthing misdiagnosis-related harms using big data. BMJ Qual Saf 2018;27:557–66. doi: 10.1136/bmjqs-2017-007032.
65. Michelson, KA, Buchhalter, LC, Bachur, RG, Mahajan, P, Monuteaux, MC, Finkelstein, JA. Accuracy of automated identification of delayed diagnosis of pediatric appendicitis and sepsis in the ED. Emerg Med J 2019;36:736–40. doi: 10.1136/emermed-2019-208841.
66. Mahajan, P, Basu, T, Pai, CW, Singh, H, Petersen, N, Bellolio, MF, et al. Factors associated with potentially missed diagnosis of appendicitis in the emergency department. JAMA Netw Open 2020;3:e200612. doi: 10.1001/jamanetworkopen.2020.0612.
67. Newman-Toker, DE, Moy, E, Valente, E, Coffey, R, Hines, AL. Missed diagnosis of stroke in the emergency department: a cross-sectional analysis of a large population-based sample. Diagnosis (Berl) 2014;1:155–66. doi: 10.1515/dx-2013-0038.
68. Southern, DA, Burnand, B, Droesler, SE, Flemons, W, Forster, AJ, Gurevich, Y, et al. Deriving ICD-10 codes for Patient Safety Indicators for large-scale surveillance using administrative hospital data. Med Care 2017;55:252–60. doi: 10.1097/MLR.0000000000000649.
69. Miller, MR, Elixhauser, A, Zhan, C, Meyer, GS. Patient Safety Indicators: using administrative data to identify potential patient safety concerns. Health Serv Res 2001;36:110–32.
70. Bhise, V, Meyer, AND, Singh, H, Wei, L, Russo, E, Al-Mutairi, A, et al. Errors in diagnosis of spinal epidural abscesses in the era of electronic health records. Am J Med 2017;130:975–81. doi: 10.1016/j.amjmed.2017.03.009.
71. Singh, H, Daci, K, Petersen, L, Collins, C, Petersen, N, Shethia, A, et al. Missed opportunities to initiate endoscopic evaluation for colorectal cancer diagnosis. Am J Gastroenterol 2009;104:2543–54. doi: 10.1038/ajg.2009.324.
72. Agency for Healthcare Research and Quality. Triggers and targeted injury detection systems (TIDS) expert panel meeting: conference summary report. Rockville, MD: AHRQ; 2009. AHRQ Publication No. 090003. Available from: https://psnet.ahrq.gov/issue/triggers-and-targeted-injury-detection-systems-tids-expert-panel-meeting-conference-summary [Accessed 2 Apr 2020].
73. Mull, HJ, Nebeker, JR, Shimada, SL, Kaafarani, HM, Rivard, PE, Rosen, AK. Consensus building for development of outpatient adverse drug event triggers. J Patient Saf 2011;7:66–71. doi: 10.1097/PTS.0b013e31820c98ba.
74. Szekendi, MK, Sullivan, C, Bobb, A, Feinglass, J, Rooney, D, Barnard, C, et al. Active surveillance using electronic triggers to detect adverse events in hospitalized patients. Qual Saf Health Care 2006;15:184–90. doi: 10.1136/qshc.2005.014589.
75. Agency for Healthcare Research and Quality. Patient safety primer. Triggers and trigger tools; 2019. Available from: https://psnet.ahrq.gov/primer/triggers-and-trigger-tools [Accessed 2 Apr 2020].
76. Shenvi, EC, El-Kareh, R. Clinical criteria to screen for inpatient diagnostic errors: a scoping review. Diagnosis (Berl) 2015;2:3–19. doi: 10.1515/dx-2014-0047.
77. Singh, H, Thomas, EJ, Khan, MM, Petersen, LA. Identifying diagnostic errors in primary care using an electronic screening algorithm. Arch Intern Med 2007;167:302–8. doi: 10.1001/archinte.167.3.302.
78. Murphy, DR, Laxmisan, A, Reis, BA, Thomas, EJ, Esquivel, A, Forjuoh, SN, et al. Electronic health record-based triggers to detect potential delays in cancer diagnosis. BMJ Qual Saf 2014;23:8–16. doi: 10.1136/bmjqs-2013-001874.
79. Classen, DC, Pestotnik, SL, Evans, RS, Burke, JP. Computerized surveillance of adverse drug events in hospital patients. J Am Med Assoc 1991;266:2847–51. doi: 10.1001/jama.1991.03470200059035.
80. Classen, DC, Resar, R, Griffin, F, Federico, F, Frankel, T, Kimmel, N, et al. “Global trigger tool” shows that adverse events in hospitals may be ten times greater than previously measured. Health Aff 2011;30:581–9. doi: 10.1377/hlthaff.2011.0190.
81. Doupi, P SH, Bjørn, B, Deilkås, E, Nylén, U, Rutberg, H. Use of the Global Trigger Tool in patient safety improvement efforts: nordic experiences. Cogn Technol Work 2015;17:45–54. doi: 10.1007/s10111-014-0302-2.
82. Classen, D, Li, M, Miller, S, Ladner, D. An electronic health record-based real-time analytics program for patient safety surveillance and improvement. Health Aff 2018;37:1805–12. doi: 10.1377/hlthaff.2018.0728.
83. Danforth, KN, Smith, AE, Loo, RK, Jacobsen, SJ, Mittman, BS, Kanter, MH. Electronic clinical surveillance to improve outpatient care: diverse applications within an integrated delivery system. eGEMs 2014;2:1056. doi: 10.13063/2327-9214.1056.
84. Murphy, DR, Meyer, AN, Vaghani, V, Russo, E, Sittig, DF, Richards, KA, et al. Application of electronic algorithms to improve diagnostic evaluation for bladder cancer. Appl Clin Inform 2017;8:279–90. doi: 10.4338/ACI-2016-10-RA-0176.
85. Murphy, DR, Wu, L, Thomas, EJ, Forjuoh, SN, Meyer, AN, Singh, H. Electronic trigger-based intervention to reduce delays in diagnostic evaluation for cancer: a cluster randomized controlled trial. J Clin Oncol 2015;33:3560–7. doi: 10.1200/JCO.2015.61.1301.
86. Murphy, DR, Thomas, EJ, Meyer, AN, Singh, H. Development and validation of electronic health record-based triggers to detect delays in follow-up of abnormal lung imaging findings. Radiology 2015:142530. doi: 10.1148/radiol.2015142530.
87. Singh, H, Hirani, K, Kadiyala, H, Rudomiotov, O, Davis, T, Khan, MM, et al. Characteristics and predictors of missed opportunities in lung cancer diagnosis: an electronic health record-based study. J Clin Oncol 2010;28:3307–15. doi: 10.1200/JCO.2009.25.6636.
88. National Quality Forum. Improving diagnostic quality and safety: final report. (Developed under Department of Health and Human Services Contract HHSM-500-2012-00009I, Task Order HHSM-500-T0026). Washington, DC: NQF; 2017. Available from: http://www.qualityforum.org/Publications/2017/09/Improving_Diagnostic_Quality_and_Safety_Final_Report.aspx [Accessed 2 Apr 2020].
89. Young, IJB, Luz, S, Lone, N. A systematic review of natural language processing for classification tasks in the field of incident reporting and adverse event analysis. Int J Med Inform 2019;132:103971. doi: 10.1016/j.ijmedinf.2019.103971.
90. Melton, GB, Hripcsak, G. Automated detection of adverse events using natural language processing of discharge summaries. J Am Med Inform Assoc 2005;12:448–57. doi: 10.1197/jamia.M1794.
91. Fong, A, Harriott, N, Walters, DM, Foley, H, Morrissey, R, Ratwani, RR. Integrating natural language processing expertise with patient safety event review committees to improve the analysis of medication events. Int J Med Inform 2017;104:120–5. doi: 10.1016/j.ijmedinf.2017.05.005.
92. Jagannatha, A, Liu, F, Liu, W, Yu, H. Overview of the first natural language processing challenge for extracting medication, indication, and adverse drug events from electronic health record notes (MADE 1.0). Drug Saf 2019;42:99–111. doi: 10.1007/s40264-018-0762-z.
93. Singh, H, Graber, M, Onakpoya, I, Schiff, GD, Thompson, MJ. The global burden of diagnostic errors in primary care. BMJ Qual Saf 2016;26:484–94. doi: 10.1136/bmjqs-2016-005401.
94. Al-Mutairi, A, Meyer, AN, Thomas, EJ, Etchegaray, JM, Roy, KM, Davalos, MC, et al. Accuracy of the Safer Dx Instrument to identify diagnostic errors in primary care. J Gen Intern Med 2016;31:602–8. doi: 10.1007/s11606-016-3601-x.
95. Cifra, CL, Ten Eyck, P, Dawson, JD, Reisinger, HS, Singh, H, Herwaldt, LA. Factors associated with diagnostic error on admission to a PICU: a pilot study. Pediatr Crit Care Med 2020. Available from: https://www.ncbi.nlm.nih.gov/pubmed/32097247 [Accessed 2 Apr 2020]. doi: 10.1097/PCC.0000000000002257.
96. Bergl, PA, Taneja, A, El-Kareh, R, Singh, H, Nanchal, RS. Frequency, risk factors, causes, and consequences of diagnostic errors in critically ill medical patients: a retrospective cohort study. Crit Care Med 2019;47:e902–10. doi: 10.1097/CCM.0000000000003976.
97. Singh, H, Khanna, A, Spitzmueller, C, Meyer, A. Recommendations for using the Revised Safer Dx Instrument to help measure and improve diagnostic safety. Diagnosis (Berl) 2019;6:315–23. doi: 10.1515/dx-2019-0012.
98. Mathews, BK, Fredrickson, M, Sebasky, M, Seymann, G, Ramamoorthy, S, Vilke, G, et al. Structured case reviews for organizational learning about diagnostic vulnerabilities: initial experiences from two medical centers. Diagnosis (Berl) 2020;7:27–35. doi: 10.1515/dx-2019-0032.
99. Schiff, GD, Hasan, O, Kim, S, Abrams, R, Cosby, K, Lambert, BL, et al. Diagnostic error in medicine: analysis of 583 physician-reported errors. Arch Intern Med 2009;169:1881–7. doi: 10.1001/archinternmed.2009.333.
100. Reilly, JB, Myers, JS, Salvador, D, Trowbridge, RL. Use of a novel, modified fishbone diagram to analyze diagnostic errors. Diagnosis (Berl) 2014;1:167–71. doi: 10.1515/dx-2013-0040.
101. Graber, ML, Rencic, J, Rusz, D, Papa, F, Croskerry, P, Zierler, B, et al. Improving diagnosis by improving education: a policy brief on education in healthcare professions. Diagnosis (Berl) 2018;5:107–18. doi: 10.1515/dx-2018-0033.
102. Henriksen, K, Dymek, C, Harrison, MI, Brady, PJ, Arnold, SB. Challenges and opportunities from the Agency for Healthcare Research and Quality (AHRQ) research summit on improving diagnosis: a proceedings review. Diagnosis (Berl) 2017;4:57–66. doi: 10.1515/dx-2017-0016.
103. Ohio Hospital Association. OPSI offers free diagnostic errors webinar on Sept. 19; 2018. Available from: https://ohiohospitals.org/News-Publications/Subscriptions/Member-Newsletters/HIINformation/HIINformation/OPSI-Offers-Free-Diagnostic-Errors-Webinar-on-Sept [Accessed 2 Apr 2020].
104. Medford-Davis, L, Park, E, Shlamovitz, G, Suliburk, J, Meyer, AN, Singh, H. Diagnostic errors related to acute abdominal pain in the emergency department. Emerg Med J 2016;33:253–9. doi: 10.1136/emermed-2015-204754.
106. Sittig, DF, Singh, H. Toward more proactive approaches to safety in the electronic health record era. Jt Comm J Qual Patient Saf 2017;43:540–7. doi: 10.1016/j.jcjq.2017.06.005.
107. Institute for Healthcare Improvement/National Patient Safety Foundation. Closing the loop: a guide to safer ambulatory referrals in the EHR era. Cambridge, MA: IHI; 2017. Available from: http://www.ihi.org/resources/Pages/Publications/Closing-the-Loop-A-Guide-to-Safer-Ambulatory-Referrals.aspx [Accessed 2 Apr 2020].
109. Bates, DW, Singh, H. Two decades since To Err Is Human: an assessment of progress and emerging priorities in patient safety. Health Aff 2018;37. Available from: https://www.healthaffairs.org/doi/full/10.1377/hlthaff.2018.0738?url_ver=Z39.88-2003&rfr_id=ori:rid:crossref.org&rfr_dat=cr_pub%3dpubmed [Accessed 2 Apr 2020]. doi: 10.1377/hlthaff.2018.0738.
110. Meyer, AND, Singh, H. The path to diagnostic excellence includes feedback to calibrate how clinicians think. J Am Med Assoc 2019;321:737–8. doi: 10.1001/jama.2019.0113.
111. Meyer, AN, Payne, VL, Meeks, DW, Rao, R, Singh, H. Physicians’ diagnostic accuracy, confidence, and resource requests: a vignette study. JAMA Intern Med 2013;173:1952–8. doi: 10.1001/jamainternmed.2013.10081.
112. Schiff, GD, Martin, SA, Eidelman, DH, Volk, LA, Ruan, E, Cassel, C, et al. Ten principles for more conservative, care-full diagnosis. Ann Intern Med 2018;169:643–5. doi: 10.7326/M18-1468.
113. Cifra, CL, Jones, KL, Ascenzi, JA, Bhalala, US, Bembea, MM, Newman-Toker, DE, et al. Diagnostic errors in a PICU: insights from the morbidity and mortality conference. Pediatr Crit Care Med 2015;16:468–76. doi: 10.1097/PCC.0000000000000398.
114. Gupta, A, Snyder, A, Kachalia, A, Flanders, S, Saint, S, Chopra, V. Malpractice claims related to diagnostic errors in the hospital. BMJ Qual Saf 2017;27. Available from: https://www.ncbi.nlm.nih.gov/pubmed/28794243 [Accessed 2 Apr 2020]. doi: 10.1136/bmjqs-2017-006774.
115. Shojania, KG, Burton, EC, McDonald, KM, Goldman, L. Changes in rates of autopsy-detected diagnostic errors over time: a systematic review. J Am Med Assoc 2003;289:2849–56. doi: 10.1001/jama.289.21.2849.
116. Walton, MM, Harrison, R, Kelly, P, Smith-Merry, J, Manias, E, Jorm, C, et al. Patients’ reports of adverse events: a data linkage study of Australian adults aged 45 years and over. BMJ Qual Saf 2017;26:743–50. doi: 10.1136/bmjqs-2016-006339.
117. Fowler, FJ Jr, Epstein, A, Weingart, SN, Annas, CL, Bolcic-Jankovic, D, Clarridge, B, et al. Adverse events during hospitalization: results of a patient survey. Jt Comm J Qual Patient Saf 2008;34:583–90. doi: 10.1016/S1553-7250(08)34073-2.
© 2020 Walter de Gruyter GmbH, Berlin/Boston